Software Libraries and Middleware for Exascale Systems


HPC and AI Middleware for Exascale Systems and Clouds
Talk at the HPC-AI Advisory Council Japan Conference (January '21) by Dhabaleswar K. (DK) Panda, The Ohio State University
E-mail: [email protected]
Follow us on http://www.cse.ohio-state.edu/~panda and https://twitter.com/mvapich

Increasing Usage of HPC, Big Data and Deep/Machine Learning
• HPC (MPI, PGAS, etc.)
• Big Data (Hadoop, Spark, HBase, Memcached, etc.)
• Deep/Machine Learning (TensorFlow, PyTorch, BigDL, cuML, etc.)
Convergence of HPC, Big Data, and Deep/Machine Learning, and an increasing need to run these applications on the cloud!

Presentation Overview
• MVAPICH Project – MPI and PGAS (MVAPICH) Library with CUDA-Awareness
• HiDL Project – High-Performance Deep Learning and High-Performance Machine Learning
• Optimizations and Deployments in Public Cloud – AWS and Azure
• Conclusions

Overview of the MVAPICH2 Project
• High-performance open-source MPI library
• Support for multiple interconnects – InfiniBand, Omni-Path, Ethernet/iWARP, RDMA over Converged Ethernet (RoCE), and AWS EFA
• Support for multiple platforms – x86, OpenPOWER, ARM, Xeon-Phi, GPGPUs (NVIDIA and AMD)
• Started in 2001; first open-source version demonstrated at SC '02
• Supports the latest MPI-3.1 standard
• Used by more than 3,125 organizations in 89 countries
• More than 1.2 million downloads directly from the OSU site: http://mvapich.cse.ohio-state.edu
• Empowering many TOP500 clusters (Nov '20 ranking):
  – 4th: 10,649,600-core Sunway TaihuLight at NSC, Wuxi, China
  – 9th: 448,448-core Frontera at TACC
  – 14th: 391,680-core ABCI in Japan
  – 21st: 570,020-core Nurion in South Korea, and many others
• Additional optimized versions for different systems/environments:
  – MVAPICH2-X (Advanced MPI + PGAS), since 2011
  – MVAPICH2-GDR with support for NVIDIA GPGPUs, since 2014
  – MVAPICH2-MIC with support for Intel Xeon-Phi, since 2014
  – MVAPICH2-Virt with virtualization support, since 2015
  – MVAPICH2-EA with support for Energy-Awareness, since 2015
  – MVAPICH2-Azure for Azure HPC IB instances, since 2019
  – MVAPICH2-X-AWS for AWS HPC+EFA instances, since 2019
• Tools:
  – OSU MPI Micro-Benchmarks (OMB), since 2003
  – OSU InfiniBand Network Analysis and Monitoring (INAM), since 2015
• Available with the software stacks of many vendors and Linux distros (RedHat, SuSE, OpenHPC, and Spack)
• Partner in the 9th-ranked TACC Frontera system
• Empowering Top500 systems for more than 16 years

Architecture of the MVAPICH2 Software Family for HPC and DL/ML
• High-performance parallel programming models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance, scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collective algorithms, job startup, remote memory access, energy-awareness, fault tolerance, active messages, I/O and file systems, introspection and analysis, and virtualization
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path, Elastic Fabric Adapter) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon-Phi, ARM, NVIDIA GPGPU)
• Transport protocols: RC, SRD, UD, DC, UMR, ODP; modern features: SR-IOV, multi-rail
• Transport mechanisms: shared memory, CMA, IVSHMEM, XPMEM; modern features: Optane*, NVLink, CAPI* (* upcoming)
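To make the programming-model layer concrete, here is a minimal sketch of MPI point-to-point and collective calls from Python via mpi4py; mpi4py runs over any underlying MPI library, MVAPICH2 included, and the payloads below are placeholders for illustration.

```python
# Minimal MPI sketch using mpi4py (works over MVAPICH2 or any MPI library).
# Launch with e.g.: mpirun -np 4 python mpi_hello.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Point-to-point: rank 0 sends a Python object to rank 1.
if rank == 0 and size > 1:
    comm.send({"msg": "hello"}, dest=1, tag=0)
elif rank == 1:
    data = comm.recv(source=0, tag=0)
    print(f"rank 1 received {data}")

# Collective: sum each rank's value across all ranks.
total = comm.allreduce(rank, op=MPI.SUM)
if rank == 0:
    print(f"sum of ranks = {total}")
```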
MVAPICH2 Software Family (requirements and the matching library)
• MPI with IB, iWARP, Omni-Path, and RoCE: MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE: MVAPICH2-X
• MPI with IB, RoCE, and GPU, plus support for Deep/Machine Learning: MVAPICH2-GDR
• HPC cloud with MPI and IB: MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP, and RoCE: MVAPICH2-EA
• MPI energy monitoring tool: OEMT
• InfiniBand network analysis and monitoring: OSU INAM
• Microbenchmarks for measuring MPI and PGAS performance: OMB

Startup Performance on TACC Frontera
[Figure: MPI_Init time for MVAPICH2 2.3.4 vs. Intel MPI 2020 from 56 to 229,376 processes, with annotated speedups of 14X and 46X over Intel MPI at scale.]
• MPI_Init takes 31 seconds on 229,376 processes on 4,096 nodes
• All numbers reported with 56 processes per node
• New designs available since MVAPICH2-2.3.4

Hardware Multicast-aware MPI_Bcast on TACC Frontera
[Figure: MPI_Bcast latency, default vs. multicast designs, swept over message sizes (2 B to 512 KB at 2K nodes, PPN=28) and over node counts (2 to 2K nodes for 16 B and 32 KB messages), showing 1.8X to 2X improvements.]
• MCAST-based designs improve the latency of MPI_Bcast by up to 2X at 2,048 nodes
• Use MV2_USE_MCAST=1 to enable MCAST-based designs

Performance of Collectives with SHARP on TACC Frontera
[Figure: MPI_Reduce and MPI_Allreduce latency vs. message size (PPN=1, 7,861 nodes) and MPI_Barrier latency vs. node count (4 to 7,861 nodes), MVAPICH2-X vs. MVAPICH2-X-SHARP.]
• Optimized SHARP designs in MVAPICH2-X
• Up to 9X performance improvement with SHARP over the MVAPICH2-X default for 1-PPN MPI_Barrier, 6X for 1-PPN MPI_Reduce, and 5X for 1-PPN MPI_Allreduce
• Optimized runtime parameter: MV2_ENABLE_SHARP=1
• B. Ramesh, K. Suresh, N. Sarkauskas, M. Bayatpour, J. Hashmi, H. Subramoni, and D. K. Panda, "Scalable MPI Collectives using SHARP: Large Scale Performance Evaluation on the TACC Frontera System," ExaMPI 2020 (Workshop on Exascale MPI), Nov 2020.
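For a sense of how such broadcast latencies are measured, here is a rough, OMB-style MPI_Bcast timing loop sketched in mpi4py; the sizes and iteration counts are illustrative, and this is not the actual OMB code. Under MVAPICH2, setting MV2_USE_MCAST=1 in the launch environment enables the multicast designs described above.

```python
# Rough OMB-style MPI_Bcast latency loop in mpi4py (illustrative only).
# Under MVAPICH2, launch with MV2_USE_MCAST=1 in the environment to
# exercise the hardware-multicast designs.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
iters, warmup = 1000, 100

for size in (16, 1024, 32 * 1024):  # message sizes in bytes
    buf = np.zeros(size, dtype=np.uint8)
    comm.Barrier()
    start = MPI.Wtime()
    for i in range(iters + warmup):
        if i == warmup:              # restart the clock after warmup
            comm.Barrier()
            start = MPI.Wtime()
        comm.Bcast(buf, root=0)
    elapsed = MPI.Wtime() - start
    if rank == 0:
        print(f"{size} B: {elapsed / iters * 1e6:.2f} us avg")
```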
Support for ARM A64FX with InfiniBand (Ookami)
[Figure: MVAPICH2-X vs. OpenMPI on inter-node latency, inter-node bidirectional bandwidth, intra-node latency, and MPI_Bcast latency (8 nodes, 48 PPN), across message sizes from 1 B to 4 MB.]

Optimized MVAPICH2-GDR with CUDA-Aware MPI Support
[Figure: GPU-GPU inter-node latency (down to 1.85 µs, about 10x better), bandwidth (about 9x higher), and bidirectional bandwidth (about 11x higher) for MV2-GDR 2.3 vs. MV2 without GDR.]
• Platform: Intel Haswell (E5-2687W @ 3.10 GHz, 20 cores) node, NVIDIA Volta V100 GPU, Mellanox ConnectX-4 EDR HCA, CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct RDMA

MVAPICH2-GDR ROCm Support for AMD GPUs
• Results shown for intra-node and inter-node point-to-point latency, plus Allreduce and Bcast on 64 GPUs (8 nodes, 8 GPUs per node)
• Corona cluster: ROCm 3.9.0, AMD MI50 GPUs
• Available with MVAPICH2-GDR 2.3.5

MVAPICH2 (MPI)-driven Infrastructure for ML/DL Training
• TensorFlow, PyTorch, and MXNet over Horovod; PyTorch also over Torch.distributed and DeepSpeed
• MVAPICH2 or MVAPICH2-X for CPU training; MVAPICH2-GDR for GPU training
• More details available from http://hidl.cse.ohio-state.edu

Multiple Approaches taken up by OSU
• MPI-driven Deep Learning – CPU-based and GPU-based Deep Learning
• Out-of-core DNN training
• Exploiting hybrid (data and model) parallelism
• Use case: AI-driven digital pathology
• Accelerating Dask and cuML applications

Distributed TensorFlow on TACC Frontera (2,048 CPU nodes with 114,688 cores)
• Scaled TensorFlow to 2,048 nodes on Frontera using MVAPICH2
• MVAPICH2 and Intel MPI give similar performance for DNN training
• Reported a peak of 260,000 images/sec on 2,048 nodes; at that scale, ResNet-50 can be trained in 7 minutes!
• A. Jain, A. A. Awan, H. Subramoni, and D. K. Panda, "Scaling TensorFlow, PyTorch, and MXNet using MVAPICH2 for High-Performance Deep Learning on Frontera," DLS '19 (SC '19 Workshop).
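In the Horovod path shown above, each MPI rank runs one training process and gradient averaging becomes an MPI allreduce. Below is a minimal, hedged Keras/Horovod data-parallel sketch; the model, dataset, and hyperparameters are placeholders rather than the ResNet-50 setup used on Frontera.

```python
# Minimal Horovod + Keras data-parallel training sketch (illustrative).
# Launch with e.g.: mpirun -np 8 python train.py  (MPI backend, e.g. MVAPICH2)
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one Horovod process per MPI rank

# Placeholder model and data; a real run would use ImageNet/ResNet-50.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0

# Wrap the optimizer so gradients are averaged across ranks via allreduce.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

# Keep ranks consistent: broadcast initial weights from rank 0.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]
model.fit(x, y, batch_size=64, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```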
Recommended publications
  • Deep Learning Frameworks | NVIDIA Developer
Deep Learning Frameworks (NVIDIA Developer, 4/10/2017)
The NVIDIA Deep Learning SDK accelerates widely-used deep learning frameworks such as Caffe, CNTK, TensorFlow, Theano and Torch, as well as many other deep learning applications. Choose a deep learning framework from the list below, download the supported version of cuDNN and follow the instructions on the framework page to get started.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. Caffe is developed by the Berkeley Vision and Learning Center (BVLC), as well as community contributors, and is popular for computer vision. Caffe supports cuDNN v5 for GPU acceleration. Supported interfaces: C, C++, Python, MATLAB, command line interface. Learning resources: the deep learning course "Getting Started with the Caffe Framework" and the blog post "Deep Learning for Computer Vision with Caffe and cuDNN".
The Microsoft Cognitive Toolkit (previously known as CNTK) is a unified deep-learning toolkit from Microsoft Research that makes it easy to train and combine popular model types across multiple GPUs and servers. Microsoft Cognitive Toolkit implements highly efficient CNN and RNN training for speech, image and text data, and supports cuDNN v5.1 for GPU acceleration. Supported interfaces: Python, C++, C# and command line interface.
TensorFlow is a software library for numerical computation using data flow graphs, developed by Google's Machine Intelligence research organization. TensorFlow supports cuDNN v5.1 for GPU acceleration. Supported interfaces: C++, Python.
Theano is a math expression compiler that efficiently defines, optimizes, and evaluates mathematical expressions involving multi-dimensional arrays.
(https://developer.nvidia.com/deep-learning-frameworks)
  • Comparative Study of Deep Learning Software Frameworks
Comparative Study of Deep Learning Software Frameworks
Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah
Research and Technology Center, Robert Bosch LLC
{Soheil.Bahrampour, Naveen.Ramakrishnan, fixed-term.Lukas.Schott, Mohak.Shah}@us.bosch.com
ABSTRACT: Deep learning methods have resulted in significant performance improvements in several application domains and as such several software frameworks have been developed to facilitate their implementation. This paper presents a comparative study of five deep learning frameworks, namely Caffe, Neon, TensorFlow, Theano, and Torch, on three aspects: extensibility, hardware utilization, and speed. The study is performed on several types of deep learning architectures and we evaluate the performance of the above frameworks when employed on a single machine for both (multi-threaded) CPU and GPU (Nvidia Titan X) settings. The speed performance metrics used here include the gradient computation time, which is important during the training phase of deep networks, and the forward time, which is important from the deployment perspective of trained networks.
… such as dropout and weight decay [2]. As the popularity of the deep learning methods have increased over the last few years, several deep learning software frameworks have appeared to enable efficient development and implementation of these methods. The list of available frameworks includes, but is not limited to, Caffe, DeepLearning4J, deepmat, Eblearn, Neon, PyLearn, TensorFlow, Theano, Torch, etc. Different frameworks try to optimize different aspects of training or deployment of a deep learning algorithm. For instance, Caffe emphasises ease of use where standard layers can be easily configured without hard-coding, while Theano provides automatic differentiation capabilities which facilitates flexibility to modify architecture for research and development. Several of these frameworks have received wide attention from the research community and are well-developed, allowing efficient training of deep networks with …
  • Comparative Study of Caffe, Neon, Theano, and Torch
Workshop track - ICLR 2016
Comparative Study of Caffe, Neon, Theano, and Torch for Deep Learning
Soheil Bahrampour, Naveen Ramakrishnan, Lukas Schott, Mohak Shah
Bosch Research and Technology Center
{Soheil.Bahrampour, Naveen.Ramakrishnan, fixed-term.Lukas.Schott, [email protected]
ABSTRACT: Deep learning methods have resulted in significant performance improvements in several application domains and as such several software frameworks have been developed to facilitate their implementation. This paper presents a comparative study of four deep learning frameworks, namely Caffe, Neon, Theano, and Torch, on three aspects: extensibility, hardware utilization, and speed. The study is performed on several types of deep learning architectures and we evaluate the performance of the above frameworks when employed on a single machine for both (multi-threaded) CPU and GPU (Nvidia Titan X) settings. The speed performance metrics used here include the gradient computation time, which is important during the training phase of deep networks, and the forward time, which is important from the deployment perspective of trained networks. For convolutional networks, we also report how each of these frameworks support various convolutional algorithms and their corresponding performance. From our experiments, we observe that Theano and Torch are the most easily extensible frameworks. We observe that Torch is best suited for any deep architecture on CPU, followed by Theano. It also achieves the best performance on the GPU for large convolutional and fully connected networks, followed closely by Neon. Theano achieves the best performance on GPU for training and deployment of LSTM networks. Finally, Caffe is the easiest for evaluating the performance of standard deep architectures.
  • Torch7: A Matlab-like Environment for Machine Learning
Torch7: A Matlab-like Environment for Machine Learning
Ronan Collobert (Idiap Research Institute, Martigny, Switzerland), Koray Kavukcuoglu (NEC Laboratories America, Princeton, NJ, USA), Clément Farabet (Courant Institute of Mathematical Sciences, New York University, New York, NY, USA, and Université Paris-Est, Équipe A3SI, ESIEE Paris, France)
Abstract: Torch7 is a versatile numeric computing framework and machine learning library that extends Lua. Its goal is to provide a flexible environment to design and train learning machines. Flexibility is obtained via Lua, an extremely lightweight scripting language. High performance is obtained via efficient OpenMP/SSE and CUDA implementations of low-level numeric routines. Torch7 can easily be interfaced to third-party software thanks to Lua's light interface.
1 Torch7 Overview: With Torch7, we aim at providing a framework with three main advantages: (1) it should ease the development of numerical algorithms, (2) it should be easily extended (including the use of other libraries), and (3) it should be fast. We found that a scripting (interpreted) language with a good C API appears as a convenient solution to "satisfy" constraint (2). First, a high-level language makes the process of developing a program simpler and more understandable than a low-level language. Second, if the programming language is interpreted, it becomes also easier to quickly try various ideas in an interactive manner. And finally, assuming a good C API, the scripting language becomes the "glue" between heterogeneous libraries: different structures of the same concept (coming from different libraries) can be hidden behind a unique structure in the scripting language, while keeping all the functionalities coming from all the different libraries.
  • TensorFlow, Theano, Keras, Torch, Caffe
TensorFlow, Theano, Keras, Torch, Caffe
Vicky Kalogeiton, Stéphane Lathuilière, Pauline Luc, Thomas Lucas, Konstantin Shmelkov
Introduction:
• TensorFlow: Google Brain, 2015 (rewritten DistBelief)
• Theano: University of Montréal, 2009
• Keras: François Chollet, 2015 (now at Google)
• Torch: Facebook AI Research, Twitter, Google DeepMind
• Caffe: Berkeley Vision and Learning Center (BVLC), 2013
Outline: 1. Introduction of each framework (TensorFlow, Theano, Keras, Torch, Caffe); 2. Further comparison (code + models, community and documentation, performance, model deployment, extra features); 3. Which framework to choose when?
TensorFlow architecture: 1) low-level core (C++/CUDA); 2) simple Python API to define the computational graph; 3) high-level APIs (TF-Learn, TF-Slim, soon Keras).
TensorFlow computational graph: auto-differentiation; easy multi-GPU/multi-node; native C++ multithreading; device-efficient implementation for most ops; the whole pipeline lives in the graph (data loading, preprocessing, prefetching); visualization via TensorBoard.
TensorFlow development: + bleeding edge (GitHub); + division into core and contrib, so new features merge very quickly; + a lot of new related APIs (CRF, BayesFlow, SparseTensor, audio IO, CTC, seq2seq), so it can easily handle images, videos, audio, text; + if you really need a new native op, you can load a dynamic lib; - sometimes contrib stuff disappears or moves; - recently introduced bells and whistles are barely documented.
Presentation of Theano: maintained by the Montréal University group; pioneered the use of a computational graph; a general machine learning tool, used via Lasagne and Keras; very popular in the research community, but not elsewhere, and falling behind. What is it like to start using Theano? Read tutorials until you no longer can, then keep going.
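The auto-differentiation these slides highlight is easiest to see in code. Below is a small sketch using TensorFlow's GradientTape API, the modern eager-mode counterpart of the TF1-era graph the slides describe; the variables and target value are made up for illustration.

```python
# Auto-differentiation sketch with TensorFlow's GradientTape
# (eager-mode counterpart of the TF1 computational graph).
import tensorflow as tf

w = tf.Variable(2.0)
b = tf.Variable(0.5)

with tf.GradientTape() as tape:
    y = w * 3.0 + b         # a tiny "graph": y = w*x + b at x = 3
    loss = (y - 10.0) ** 2  # squared error against a target of 10

# TensorFlow differentiates the recorded operations automatically.
dw, db = tape.gradient(loss, [w, b])
print(float(dw), float(db))  # d(loss)/dw and d(loss)/db
```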
  • Introduction to Deep Learning Frameworks
Introduction to Deep Learning Frameworks
1. Introduction
1.1. Commonly used frameworks: The most commonly used frameworks for deep learning include PyTorch, TensorFlow, Keras, Caffe, Apache MXNet, etc.
• PyTorch: an open-source machine learning library developed by the Facebook AI Research Lab, based on the Torch library; supports Python and C++ interfaces.
• TensorFlow: an open-source software library for dataflow and differentiable programming, developed by the Google Brain team; provides stable Python and C APIs.
• Keras: an open-source neural-network library written in Python, conceived as an interface; capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML.
• Caffe: open source under a BSD licence, developed at the University of California, Berkeley; written in C++ with a Python interface.
• Apache MXNet: an open-source deep learning software framework; supports a flexible programming model and multiple programming languages (including C++, Python, Java, Julia, Matlab, JavaScript, Go, R, Scala, Perl, and Wolfram Language).
1.2. PyTorch
1.2.1. Data: Tensor is the major computation unit in PyTorch; it can be viewed as the extension of a vector (one-dimensional) and a matrix (two-dimensional) to any number of dimensions. Variable is a wrapper around a tensor that holds the creator, the value of the variable (the tensor), and the gradient; this is the core of automatic differentiation in PyTorch, since carrying both the value and the creator is what makes the backward pass possible. Parameter is a subset of Variable.
1.2.2. Functions: NNModules (torch.nn) are combinations of parameters and functions, and can be interpreted as layers. There are common modules such as convolution layers, linear layers, pooling layers, dropout layers, etc.
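To ground the Tensor/autograd/NNModule terminology above, here is a small illustrative PyTorch sketch; note that modern PyTorch folds the old Variable wrapper into Tensor via requires_grad, and the shapes and values here are arbitrary.

```python
# PyTorch sketch: tensors, autograd, and an NNModule layer.
# Modern PyTorch folds the Variable wrapper into Tensor (requires_grad=True).
import torch
import torch.nn as nn

x = torch.randn(4, requires_grad=True)  # tensor tracked by autograd
y = (x ** 2).sum()                      # forward computation
y.backward()                            # backward pass
print(x.grad)                           # dy/dx = 2x

layer = nn.Linear(4, 2)                 # an NNModule: parameters + function
out = layer(x)                          # applies x @ W.T + b
print(out.shape)                        # torch.Size([2])
```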
  • Enhancing OMB with Python Benchmarks Presentation at MUG ‘21
MVAPICH2 Meets Python: Enhancing OMB with Python Benchmarks
Presentation at MUG '21, Nawras Alnaasan, Network-Based Computing Laboratory, Dept. of Computer Science and Engineering, The Ohio State University. alnaasan.1@osu.edu
Outline: OSU Micro Benchmarks (OMB) • Why Python? • MPI for Python • Introducing OMB-Py • Initial Results • Conclusion
OSU Micro Benchmarks (OMB): OMB is a widely used package to measure the performance of MPI operations. It helps in optimizing MPI applications on different HPC systems. It offers a series of benchmarks including, but not limited to, point-to-point, blocking and non-blocking collectives, and one-sided tests, with a large set of options and flags for users to create customizable tests. It supports different platforms like ROCm/CUDA to run on AMD/NVIDIA GPUs, and is written in C.
Why Python? The second most popular programming language according to the TIOBE index as of August 2021 (adopted from https://www.tiobe.com/tiobe-index/). Great community and support around Machine Learning, Big Data, and Cloud Computing: many deep learning frameworks like PyTorch, TensorFlow, Keras, etc., and one of the most important tools for data science and analytics. Great potential to use MPI and distributed computing techniques. Many other Python libraries and frameworks like SciPy, NumPy, Django, etc. Very flexible and simplified syntax (adopted from https://www.hardikp.com/2017/12/30/python-cpp/).
MPI for Python: MPI …
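OMB's point-to-point latency tests follow a classic ping-pong pattern; the sketch below is a rough mpi4py analogue (iteration count and message size are illustrative, and this is not OMB-Py's actual implementation).

```python
# Rough ping-pong latency sketch in mpi4py (illustrative, not OMB-Py itself).
# Run with: mpirun -np 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
iters = 10000
buf = np.zeros(8, dtype=np.uint8)  # 8-byte message

comm.Barrier()
start = MPI.Wtime()
for _ in range(iters):
    if rank == 0:
        comm.Send(buf, dest=1)    # ping
        comm.Recv(buf, source=1)  # pong
    elif rank == 1:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    # One-way latency is half the round-trip time.
    print(f"8 B latency: {elapsed / iters / 2 * 1e6:.2f} us")
```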
  • DLL: a Fast Deep Neural Network Library
Published in Artificial Neural Networks in Pattern Recognition (ANNPR 2018), Lecture Notes in Computer Science, vol. 11081, which should be cited to refer to this work. DOI: https://doi.org/10.1007/978-3-319-99978-4_4
DLL: A Fast Deep Neural Network Library
Baptiste Wicht, Andreas Fischer, and Jean Hennebert (HES-SO, University of Applied Science of Western Switzerland; University of Fribourg, Switzerland)
Abstract: Deep Learning Library (DLL) is a library for machine learning with deep neural networks that focuses on speed. It supports feed-forward neural networks such as fully-connected Artificial Neural Networks (ANNs) and Convolutional Neural Networks (CNNs). Our main motivation for this work was to propose and evaluate novel software engineering strategies with potential to accelerate runtime for training and inference. Such strategies are mostly independent of the underlying deep learning algorithms. On three different datasets and for four different neural network models, we compared DLL to five popular deep learning libraries. Experimentally, it is shown that the proposed library is systematically and significantly faster on CPU and GPU. In terms of classification performance, similar accuracies as the other libraries are reported.
1 Introduction: In recent years, neural networks have regained a large deal of attention with deep learning approaches. Such approaches rely on the use of bigger and deeper networks, typically by using larger input dimensions to incorporate more context and by increasing the number of layers to extract information at different levels of granularity. The success of deep learning can be attributed mainly to three factors. First, there is the advent of big data, meaning the availability of larger quantities of training data.
  • Neural Networks and Deep Learning: Deep Learning Software
CSCI 5922 - Neural Networks and Deep Learning: Deep Learning Software
History of Frameworks
• First generation (the first full frameworks to support CUDA/GPUs): Torch 2002, Theano 2010, Torch7 2011, Theano 2012, Pylearn2 2013, Keras 2015, Torchnet 2015 (wrappers)
• Second generation (the first frameworks to support define-by-run automatic differentiation): Autograd 2015, Chainer 2015, MXNet 2015, TensorFlow 2016, Theano 2016, DyNet 2017, PyTorch 2017, Ignite 2018, TensorFlow 2019
Torch - 2002 - http://torch.ch
While this version of Torch pre-dates deep learning, it is the prototype for contemporary machine learning frameworks; most contemporary deep learning frameworks, consciously or not, mimic Torch. Torch is written in C++ and supports machine learning algorithms like multi-layer neural networks, support vector machines, Gaussian mixture models, and hidden Markov models. It was made available under the BSD license (free to copy; commercial use and proprietary modifications are allowed as long as attribution is preserved).
The core API is inspired by object-oriented programming and design patterns, specifically by the notions of modularity and separation of interface and implementation. The API contains useful abstractions like DataSet, Machine, Measurer, and Trainer:
‣ DataSet class: responsible for loading data; relieves the engineer of the need to repeatedly write code to read training data and labels; provides an abstraction layer that allows data to be read from any source. Design pattern: Proxy.
‣ Machine class: responsible for learning the mapping from inputs to targets; several learning algorithms supported: multi-layer neural network, support vector machine, and "distributions" (Gaussian mixture model, hidden Markov model). Design pattern: Adapter.
‣ Measurer class: responsible for measuring the output of the machine; losses (mean squared error, log loss) and metrics (accuracy, F1, etc.).
  • Deep Learning Software (Security and Fairness of Deep Learning, SP18)
Deep Learning Software (Security and Fairness of Deep Learning, SP18)
Today: HW1 is out, due Feb 15th; Anaconda and Jupyter Notebook; deep learning software (Keras, Theano, Numpy).
Anaconda: a package management system for Python.
Jupyter notebook: a web application where you can code, interact, record and plot; allows for remote interaction when you are working on the cloud; you will be using it for HW1.
Deep learning software: Caffe (UCB), Caffe2 (Facebook), Paddle (Baidu), Torch (NYU/Facebook), PyTorch (Facebook), CNTK (Microsoft), Theano (U Montréal), TensorFlow (Google), MXNet (Amazon), Keras (high-level wrapper).
Mobile platform: TensorFlow Lite, released last November.
Why do we use deep learning frameworks? To easily build big computational graphs (not the case in HW1); to easily compute gradients in computational graphs; for GPU support (cuDNN, cuBLAS, etc.; not required in HW1).
Keras: a high-level deep learning framework built on other deep-learning frameworks (Theano, TensorFlow, CNTK). Easy and fun! Pass a list of layer instances to the constructor, or simply add layers; make sure the dimensions match. Then compile and train! Epoch: one epoch means going through the whole training dataset once.
Numpy: the fundamental package in Python for scientific computing and data science. Think in terms of vectors/matrices; refrain from using for loops! Similar to Matlab. Basic vector operations: sum, mean, argmax, ….
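A minimal sketch of the Keras workflow the slides outline (build from a list of layers, compile, fit); the layer sizes and training data below are placeholders:

```python
# Minimal Keras sketch: build with a list of layers (or .add), compile, fit.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    keras.layers.Dense(10, activation="softmax"),  # dimensions must match
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(128, 16)               # placeholder training data
y = np.random.randint(0, 10, size=128)
model.fit(x, y, epochs=3, batch_size=32)  # 1 epoch = one full pass over x
```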
  • Introduction to Torch
Advanced Machine Learning 2014 (Introduction to Torch)
Jost Tobias Springenberg, Jan Wülfing, Martin Riedmiller
{springj, wuelfj, [email protected]
Machine Learning Lab, University of Freiburg, November 25, 2014
Today:
• Torch: I will give a short (hopefully concise) introduction to Torch.
• Assignment: We will give out the second assignment: training an MLP / ConvNet to recognize digits using Torch, plus testing on a secret test set ;)
  – Part 1: Reproduce results (maybe even state of the art) from previous work.
  – Part 2: Train the best network you can design for MNIST and report the classification error.
  – Part 3: Give us up to 3 models that we can call using Lua; we will test these on our secret test data.
• Regarding the presentation: As before, I have borrowed slides/examples from the Torch developers, who do a much better job than me at explaining their work.
  • Deep Learning Software (Spring 2020)
Deep Learning Software (Spring 2020)
Today: Homework 1 is out, due Feb 6; Conda and Jupyter Notebook; deep learning software (Keras, TensorFlow, Numpy); Google Cloud Platform.
Conda: a package management system for Python.
Jupyter notebook: a web application where you can code, interact, record and plot; allows for remote interaction when you are working on the cloud; you will be using it for HW1.
Deep learning software: Caffe (UCB), Caffe2 (Facebook), Paddle (Baidu), Torch (NYU/Facebook), PyTorch (Facebook), CNTK (Microsoft), Theano (U Montréal), TensorFlow (Google), MXNet (Amazon), Keras (high-level wrapper).
Mobile platform: TensorFlow Lite, released last November.
Why do we use deep learning frameworks? To easily build big computational graphs (not the case in HW1); to easily compute gradients in computational graphs; for GPU support (cuDNN, cuBLAS, etc.; not required in HW1).
Keras: a high-level deep learning framework built on other deep-learning frameworks (Theano, TensorFlow, CNTK). Easy and fun! Pass a list of layer instances to the constructor, or simply add layers; make sure the dimensions match. Then compile and train! Epoch: one epoch means going through the whole training dataset once.
Numpy: the fundamental package in Python for scientific computing and data science. Think in terms of vectors/matrices; refrain from using for loops! Similar to Matlab. Basic vector operations: sum, mean, argmax, ….
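To illustrate the "refrain from using for loops" advice above, a small sketch contrasting a Python loop with the equivalent vectorized NumPy calls (array contents are arbitrary):

```python
# NumPy sketch: vectorized operations instead of Python for-loops.
import numpy as np

x = np.random.rand(1_000_000)

# Loop version (slow): sums elements one at a time in Python.
total = 0.0
for v in x:
    total += v

# Vectorized versions (fast): the loops run in compiled C inside NumPy.
assert np.isclose(total, x.sum())
print(x.sum(), x.mean(), x.argmax())  # sum, mean, index of the max
```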