Qualitative Benchmarking of Deep Learning Hardware and Frameworks: Review and Tutorial
16 pages · PDF · 1,020 KB
Recommended publications
SOL: Effortless Device Support for AI Frameworks Without Source Code Changes
Nicolas Weber and Felipe Huici, NEC Laboratories Europe.

Abstract—Modern high performance computing clusters heavily rely on accelerators to overcome the limited compute power of CPUs. These supercomputers run various applications from different domains such as simulations, numerical applications or artificial intelligence (AI). As a result, vendors need to be able to efficiently run a wide variety of workloads on their hardware. In the AI domain this is in particular exacerbated by the existence of a number of popular frameworks (e.g., PyTorch, TensorFlow, etc.) that have no common code base, and can vary in functionality. The code of these frameworks evolves quickly, making it expensive to keep up with all changes and potentially forcing developers to go through constant rounds of upstreaming. In this paper we explore how to provide hardware support in AI frameworks without changing the framework's source code in order to minimize maintenance overhead. We introduce SOL, an AI acceleration middleware that provides a hardware abstraction layer that allows us to transparently support heterogeneous hardware.

[Fig. 1: Abstraction layers within AI frameworks. Two stacks are compared, "State of the Art" and "Proposed with SOL": both expose an API (Python, C/C++, …) over a framework core; in the SOL stack, SOL replaces the per-framework device backends.]

Users only need to add a few lines of code to their scripts in order to enable SOL and its hardware support. We explore two strategies to integrate new devices into AI frameworks using SOL as a middleware, to keep the original AI framework unchanged and still add support to new device types. The first strategy hides the entire offloading procedure from the framework, and the second only injects the necessary functionality into the framework to enable the execution, but …
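To illustrate the injection idea behind the second strategy, here is a purely conceptual Python sketch (not SOL's actual API): patch a framework entry point at import time so that marked modules are routed through a middleware compiler, leaving the framework's source untouched. `compile_for_device` and the `use_offload_backend` flag are hypothetical placeholders.

```python
# Conceptual sketch only, NOT SOL's real API: inject device support by
# patching the framework's execution entry point at import time.
import torch

def compile_for_device(module):
    """Hypothetical middleware hook: a real implementation would lower the
    module to an optimized kernel for the target device. This stub simply
    returns the module unchanged."""
    return module

_original_call = torch.nn.Module.__call__

def _middleware_call(module, *args, **kwargs):
    # Offload only modules the user explicitly marked; everything else
    # falls through to the framework's own execution path.
    if getattr(module, "use_offload_backend", False):
        compiled = compile_for_device(module)
        return _original_call(compiled, *args, **kwargs)
    return _original_call(module, *args, **kwargs)

torch.nn.Module.__call__ = _middleware_call  # injected once, at import time

# Usage: ordinary framework code, with one opt-in attribute.
net = torch.nn.Linear(4, 2)
net.use_offload_backend = True
out = net(torch.ones(1, 4))  # dispatched through the middleware
```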
Machine Learning and Deep Learning for IIOT
Chanchal Chatterjee, Dell EMC. Reston, March 22, 2016.

Goals of the meeting:
➢ Provide insights on methods and systems for machine learning and deep learning.
➢ Provide machine/deep learning use cases for IIoT.
➢ Provide architectures and frameworks for machine/deep learning for IIoT.

Machine Learning & Deep Learning – Confusing, Eh! (from Machine Learning Mastery, http://machinelearningmastery.com/)

Machine learning and deep learning dependencies: types of data, types of learning, and types of algorithms.

Types of data:
• Structured data: time series, events, graphs
• Unstructured data: video/images, voice, text

Types of learning:
• Unsupervised: does not require training data; assumes normal instances are far more frequent than anomalies
• Semi-supervised: training data has labeled instances for only the normal class; assumes normal instances are far more frequent than anomalies
• Supervised

Types of algorithms:
• ML (machine learning): anomaly detection; trends, predictions & forecasting; association & grouping
• DL (deep learning): ladder networks, convolutional neural networks, recurrent neural networks, deep belief networks

Some details. Machine learning:
• Anomaly detection: point anomaly, contextual anomaly, collective anomaly, graph anomaly
• Trends, predictions & forecasting
• Associations & grouping

Deep learning:
• Ladder network
• Convolutional NN (CNN)
• Recurrent NN (RNN)
• Recurrent recursive NN (R2NN)
• Long short-term memory (LSTM)
• Deep belief networks (DBN)
• Restricted …
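To ground the "point anomaly" category above, a minimal unsupervised detector can be a plain z-score rule over a sensor series; the data and threshold below are illustrative only, and a production IIoT detector would use robust statistics or a learned model.

```python
# Toy unsupervised point-anomaly detector for an IIoT-style sensor series:
# flag readings more than k standard deviations from the mean.
import numpy as np

def point_anomalies(series: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return indices of readings whose z-score magnitude exceeds k."""
    mu, sigma = series.mean(), series.std()
    return np.where(np.abs(series - mu) > k * sigma)[0]

readings = np.array([20.1, 20.3, 19.8, 20.0, 35.7, 20.2])  # one spike at index 4
print(point_anomalies(readings))  # [4]
```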
Building Machine Learning Inference Pipelines at Scale
Julien Simon, Global Evangelist, AI & Machine Learning, @julsimon. © 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Problem statement:
• Real-life machine learning applications require more than a single model.
• Data may need pre-processing: normalization, feature engineering, dimensionality reduction, etc.
• Predictions may need post-processing: filtering, sorting, combining, etc.
Our goal: build scalable ML pipelines with open source (Spark, Scikit-learn, XGBoost) and managed services (Amazon EMR, AWS Glue, Amazon SageMaker).

Apache Spark (https://spark.apache.org/):
• Open-source, distributed processing system
• In-memory caching and optimized execution for fast performance (typically 100x faster than Hadoop)
• Batch processing, streaming analytics, machine learning, graph databases and ad hoc queries
• APIs for Java, Scala, Python, R, and SQL
• Available in Amazon EMR and AWS Glue

MLlib – machine learning library (https://spark.apache.org/docs/latest/ml-guide.html):
• Algorithms: classification, regression, clustering, collaborative filtering
• Featurization: feature extraction, transformation, dimensionality reduction
• Tools for constructing, evaluating and tuning pipelines
• Transformer: a transform function that maps a DataFrame into a new DataFrame …
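To make the pipeline idea concrete, here is a small self-contained PySpark sketch in the spirit of the talk: pre-processing stages followed by a model, fitted and applied as one unit. The column names and toy rows are illustrative placeholders; the pyspark.ml classes are the library's real API.

```python
# A minimal Spark ML pipeline: assemble features, normalize, then classify.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ml-pipeline-sketch").getOrCreate()
df = spark.createDataFrame(
    [(0.5, 1.2, 0), (3.1, 0.4, 1), (2.2, 2.9, 1), (0.1, 0.3, 0)],
    ["f1", "f2", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["f1", "f2"], outputCol="raw"),  # feature engineering
    StandardScaler(inputCol="raw", outputCol="features"),      # normalization
    LogisticRegression(featuresCol="features", labelCol="label"),
])
model = pipeline.fit(df)               # fits every stage in order
model.transform(df).select("prediction").show()
```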
Open Source in the Enterprise
By Andy Oram and Zaheda Bhorat. Beijing · Boston · Farnham · Sebastopol · Tokyo.

Copyright © 2018 O'Reilly Media. All rights reserved. Printed in the United States of America. Published by O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. O'Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com/safari). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected].

Editor: Michele Cronin. Interior Designer: David Futato. Production Editor: Kristen Brown. Cover Designer: Karen Montgomery. Copyeditor: Octal Publishing Services, Inc.

July 2018: First Edition. Revision history for the first edition: 2018-06-18: First release.

The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Open Source in the Enterprise, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc. The views expressed in this work are those of the authors, and do not represent the publisher's views. While the publisher and the authors have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the authors disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
Training Professionals and Researchers on Future Intelligent On-Board Embedded Heterogeneous Processing
[email protected] · Optimized Development Environment (ODE) for e20xx/e21xx intelligent onboard processing.

Description: The Optimized Development Environment (ODE) is a mini-ITX compatible rapid engineering platform for the e2000 and e2100 reliable heterogeneous compute products. The ODE provides flexible software development based on the Deep Delphi™ software library. Deep Delphi™ includes lightweight Ubuntu 16.04 LTS (AMD64) with the UNiLINK kernel driver for extended IO and health monitoring through the FPGA. Open source Linux libraries provide support for the amdgpu kernel driver, the AMD IOMMU kernel driver, AMD DDR RAM memory Error Correction Code (ECC), OpenGL, Vulkan, OpenCL (patched Mesa/Clover), Robot Operating System (ROS), Open Computer Vision (OpenCV), and optional machine learning libraries (e.g. Caffe, Theano, TensorFlow, PlaidML). Tailoring beyond the standard FPGA functionality requires additional FPGA design services by Troxel Aerospace Industries, Inc. FreeRTOS support is optional and may change the demo software flow between the AMD SOC and the FPGA. Software developed on the ODE is compatible with the Deep Delphi iX5 platform.

Specification highlights:
• Processing products: e20xx family; e21xx family
• Ethernet: 2×1000Tbase LAN (AMD); 1×1000Tbase LAN (FPGA)
• Power input: ATX 4-pin (12 V); 802.3at Type 2, 30 W PoE (FPGA LAN port); ATX 20-pin (12, 5, 3.3 V) (internal only)
• USB: 2×USB 3.0 (AMD SOC); 4×USB 2.0 (AMD SOC)
• PCI Express®: 1×4 lanes (v2) (internal)
• Graphics: HDMI & LCD/LVDS
• Serial ports: 2×RS232 (FPGA); 2×UART TTL (AMD)
• Storage: 2×SATA v3.0 (6 Gbps) (1×120 GB SSD included); 1×MicroSD-Card/MMC
• Debug: AMD SmartProbe (internal only); FPGA JTAG interface (internal); FPGA ARM Cortex-M3 interface (internal)
• Board size: 170 × 170 mm² (mini-ITX compatible)
• BIOS recovery: UNIBAP™ Safe Boot + SPI headers for DediProg
• Doc. reference: 1004002
Serverless Predictions at Scale
Thomas Reske, Global Solutions Architect, Amazon Web Services. © 2018, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Serverless computing allows you to build and run applications and services without thinking about servers.

What are the benefits of serverless computing? No server management; flexible scaling; automated high availability; no excess capacity.

The machine learning process: fetch data; clean and format data; prepare and transform data; train the model; evaluate and verify; integrate and deploy; operate, monitor, and optimize. Two key questions follow: how can I use pre-trained models for serving predictions, and how do I scale effectively?

What exactly are we deploying? A pre-trained model:
• ... is simply a model which has been trained previously
• Model artifacts (structure, helper files, parameters, weights, ...) are files, e.g. in JSON or binary format
• Can be serialized (saved) and de-serialized (loaded) by common deep learning frameworks
• Allows for fine-tuning of models ("head-start" or "freezing"), but also applying them to similar problems and different data ("transfer learning")
• Models can be made publicly available, e.g. in the MXNet Model Zoo
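The deployment pattern this implies can be sketched as follows: deserialize the model once per container at module load (the cold start), so warm invocations skip it. `load_model`, the artifact path, and the `predict()` interface below are hypothetical placeholders rather than a specific AWS or framework API.

```python
# Sketch of the serverless-inference pattern: load once, serve many times.
import json

class _StubModel:
    """Stands in for a deserialized deep learning model."""
    def predict(self, features):
        return sum(features)  # placeholder inference

def load_model(path):
    # A real handler would deserialize framework artifacts (e.g. weights
    # plus structure files) from `path`.
    return _StubModel()

MODEL = load_model("/opt/model/model.params")  # runs once per cold start

def handler(event, context):
    features = json.loads(event["body"])["features"]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": MODEL.predict(features)}),
    }
```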
Analytics Lens: AWS Well-Architected Framework
Analytics Lens AWS Well-Architected Framework Analytics Lens AWS Well-Architected Framework Analytics Lens: AWS Well-Architected Framework Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon. Analytics Lens AWS Well-Architected Framework Table of Contents Abstract ............................................................................................................................................ 1 Abstract .................................................................................................................................... 1 Introduction ...................................................................................................................................... 2 Definitions ................................................................................................................................. 2 Data Ingestion Layer ........................................................................................................... 2 Data Access and Security Layer ............................................................................................ 3 Catalog and Search Layer ................................................................................................... -
BigDL: A Distributed Deep Learning Framework for Big Data (presentation by Ezgi Korkmaz)
KTH Royal Institute of Technology. BigDL: A Distributed Deep Learning Framework for Big Data, ACM Symposium on Cloud Computing 2019 [acceptance rate: 24%]. Jason Jinquan Dai, Yiheng Wang, Xin Qiu, Ding Ding, Yao Zhang, Yanzhang Wang, Xianyan Jia, Cherry Li Zhang, Yan Wan, Zhichao Li, Jiao Wang, Shengsheng Huang, Zhongyuan Wu, Yang Wang, Yuhao Yang, Bowen She, Dongjie Shi, Qi Lu, Kai Huang, Guoqiong Song [Intel Corporation]. Presenter: Ezgi Korkmaz.

Outline:
• Deep learning frameworks
• BigDL applications
• Motivation for an end-to-end framework
• Drawbacks of prior approaches
• The BigDL framework
• Experimental setup and results of the BigDL framework
• Critique of the paper

Deep learning frameworks: there is big demand from organizations to apply deep learning to big data. Widely used frameworks include Torch [2002, Collobert et al.; C, Lua], Caffe [2014, Berkeley BAIR; C++], TensorFlow [2015, Google Brain; C++, Python, CUDA], Apache MXNet [2015, Apache Software Foundation; C++], Chainer [2015, Preferred Networks; Python], Keras [2016, François Chollet; Python], and PyTorch [2016, Facebook; Python, C++, CUDA]. Apache Spark is an open-source distributed general-purpose cluster-computing framework that provides an interface for programming clusters with data parallelism.

BigDL is a library on top of Apache Spark that provides integrated data analytics within a unified data-analysis pipeline. It allows users to write their own deep learning applications that run directly on big data clusters, supports an API similar to Torch and Keras, supports both large-scale distributed training and inference, and is able to run efficiently across hundreds or thousands of servers by using the underlying Spark framework (see the sketch below).

BigDL was developed as an open source project and is used by Mastercard, WorldBank, Cray, Talroo, UCSF, JD, UnionPay, and GigaSpaces, across a wide range of applications: transfer-learning-based image classification, object detection, feature extraction, sequence-to-sequence prediction, neural collaborative filtering for recommendation, etc.
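For intuition, here is a conceptual PySpark sketch of the synchronous data-parallel scheme BigDL implements on Spark; this is not BigDL's actual API. Each partition computes a local gradient, the driver averages them, and updated weights are broadcast for the next iteration. The toy linear-regression task is illustrative only.

```python
# Conceptual data-parallel training loop on Spark (not BigDL's real API).
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("data-parallel-sketch").getOrCreate()
sc = spark.sparkContext

# Toy data: y = 2x + 1, with features (x, bias) spread over 4 partitions.
data = sc.parallelize(
    [(np.array([i / 100.0, 1.0]), 2.0 * (i / 100.0) + 1.0) for i in range(100)],
    numSlices=4,
).cache()
w = np.zeros(2)

def local_gradient(partition, weights):
    grad, n = np.zeros_like(weights), 0
    for features, label in partition:
        grad += (weights @ features - label) * features  # squared-loss gradient
        n += 1
    yield grad, n

for _ in range(50):
    bw = sc.broadcast(w)  # ship current weights to every executor
    stats = data.mapPartitions(lambda part: local_gradient(part, bw.value)).collect()
    grad = sum(g for g, _ in stats) / sum(n for _, n in stats)
    w -= 1.0 * grad       # driver-side update, re-broadcast next iteration

print(w)  # converges toward [2.0, 1.0]
```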
Apache MXNet on AWS Developer Guide
Apache MXNet on AWS: Developer Guide. Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved. Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.

What Is Apache MXNet? Apache MXNet (MXNet) is an open source deep learning framework that allows you to define, train, and deploy deep neural networks on a wide array of platforms, from cloud infrastructure to mobile devices. It is highly scalable, which allows for fast model training, and it supports a flexible programming model and multiple languages. The MXNet library is portable and lightweight. It scales seamlessly on multiple GPUs on multiple machines. MXNet supports programming in various languages including Python, R, Scala, Julia, and Perl.

This user guide has been deprecated and is no longer available. For more information on MXNet and related material, see the topics below. MXNet is an Apache open source project. For more information about MXNet, see the following open source documentation: Getting Started, which provides details for setting up MXNet on various platforms, such as macOS, Windows, and Linux.
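As a concrete taste of the define-train-deploy flow described above, here is a minimal sketch assuming the MXNet 1.x Gluon API: define a small network, initialize it, and run a forward pass. The layer sizes and input batch are illustrative.

```python
# Minimal MXNet 1.x Gluon example: define, initialize, forward.
import mxnet as mx
from mxnet.gluon import nn

net = nn.Sequential()
net.add(nn.Dense(16, activation="relu"), nn.Dense(1))
net.initialize(mx.init.Xavier(), ctx=mx.cpu())  # swap in mx.gpu(0) on GPU hosts

x = mx.nd.random.uniform(shape=(4, 8))  # a batch of 4 examples, 8 features each
print(net(x).shape)                     # -> (4, 1)
```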
PlaidML Documentation
Vertex.AI, Apr 11, 2018.

Contents: 1. Ubuntu Linux; 2. Building and Testing PlaidML; 3. PlaidML Architecture Overview; 4. Tile Op Tutorial; 5. PlaidML; 6. Contributing to PlaidML; 7. License; 8. Reporting Issues; Python Module Index.

A framework for making deep learning work everywhere. PlaidML is a multi-language acceleration framework that:
• Enables practitioners to deploy high-performance neural nets on any device
• Allows hardware developers to quickly integrate with high-level frameworks
• Allows framework developers to easily add support for many kinds of hardware
• Works on all major platforms: Linux, macOS, Windows
For background and early benchmarks see our blog post announcing the release. PlaidML is under active development and should be thought of as alpha quality.

Ubuntu Linux. If necessary, install Python's pip tool:
  sudo add-apt-repository universe && sudo apt update
  sudo apt install python-pip
Make sure your system has OpenCL:
  sudo apt install clinfo
  clinfo
If clinfo reports "Number of platforms" == 0, you must install a driver. If you have an NVIDIA graphics card:
  sudo add-apt-repository ppa:graphics-drivers/ppa && sudo apt update
  sudo apt install nvidia-modprobe nvidia-384 nvidia-opencl-icd-384 libcuda1-384
If you have an AMD card, download the AMDGPU PRO driver and install according to AMD's instructions. Best practices for Python include judicious usage of virtualenv, and we certainly recommend creating one just …
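As a usage sketch: once PlaidML is installed and a device has been selected with the `plaidml-setup` tool, the commonly documented way to use it from Keras at the time of these docs was to select the PlaidML backend before importing Keras. Treat the exact package path as an assumption tied to that era's releases.

```python
# Route an ordinary Keras model through PlaidML (assumed PlaidML-era API).
import os
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"  # must precede keras import

import keras
from keras.layers import Dense

model = keras.Sequential([
    Dense(32, activation="relu", input_shape=(16,)),
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")  # runs on the device chosen in plaidml-setup
```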
Advancing Compiler Optimizations for General-Purpose & Domain-Specific Parallel Architectures
A dissertation presented to the academic faculty by Prasanth Chatarasi, in partial fulfillment of the requirements for the degree Doctor of Philosophy in the School of Computer Science, Georgia Institute of Technology, December 2020. Copyright © Prasanth Chatarasi 2020.

Approved by: Dr. Vivek Sarkar, Advisor (School of Computer Science, Georgia Institute of Technology); Dr. Jun Shirako, Co-Advisor (School of Computer Science, Georgia Institute of Technology); Dr. Santosh Pande (School of Computer Science, Georgia Institute of Technology); Dr. Tushar Krishna (School of Electrical and Computer Engineering, Georgia Institute of Technology); Dr. Richard Vuduc (School of Computational Science and Engineering, Georgia Institute of Technology). Date approved: July 27, 2020.

"Dream the impossible. Know that you are born in this world to do something wonderful and unique; don't let this opportunity pass by. Give yourself the freedom to dream and think big." — Sri Sri Ravi Shankar

To universal consciousness, to my family, to my advisors, mentors, teachers, and friends.

Acknowledgements. Taittiriya Upanishad, Shikshavalli I.20: "mātrudevo bhava, pitrudevo bhava, āchāryadevo bhava, atithidevo bhava" ("Respects to Mother, Father, Guru, and Guest. They are all forms of God"). First and foremost, I would like to express my gratitude to my mother Ch. Anjanee Devi and my father Dr. …
Evaluation of Deep Learning Frameworks Over Different HPC Architectures
2017 IEEE 37th International Conference on Distributed Computing Systems. Shayan Shams, Richard Platania, Kisung Lee, and Seung-Jong Park, Division of Computer Science and Engineering, Center for Computation and Technology, Baton Rouge, LA 70803, USA. Email: {sshams2, rplatania, lee, sjpark}@cct.lsu.edu

Abstract—Recent advances in deep learning have enabled researchers across many disciplines to uncover new insights about large datasets. Deep neural networks have shown applicability to image, time-series, textual, and other data, all of which are available in a plethora of research fields. However, their computational complexity and large memory overhead requires advanced software and hardware technologies to train neural networks in a reasonable amount of time. To make this possible, there has been an influx in development of deep learning software that aim to leverage advanced hardware resources. In order to better understand the performance implications of deep learning frameworks over these different resources, we analyze the performance of three different frameworks, Caffe, TensorFlow, … [the evaluation spans] three deep learning frameworks, several HPC environments, and state-of-the-art hardware technologies.

The trending development of deep learning tools is providing users with more options that can cater specifically to their needs. Typically, these needs stem from available hardware resources. For instance, a user with access to a large-scale commodity cluster may aim to utilize a different deep learning tool than a user with access to a single multi-GPU machine. This isn't to say that one single tool isn't suitable for multiple environments. Rather, the different tools were created to include features that benefit certain hardware setups.
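A sketch of the measurement skeleton such an evaluation implies: time a fixed training workload after a warm-up phase and report throughput, so one-time setup costs do not pollute the comparison. `train_one_batch` is a hypothetical stand-in for a framework-specific training step, not code from the paper.

```python
# Generic throughput harness for comparing deep learning frameworks.
import time

def train_one_batch(batch_size):
    """Hypothetical placeholder for one framework-specific training step."""
    time.sleep(0.001)  # stands in for real CPU/GPU work

def benchmark(batch_size=64, warmup=10, iters=100):
    for _ in range(warmup):            # exclude one-time setup/JIT costs
        train_one_batch(batch_size)
    start = time.perf_counter()
    for _ in range(iters):
        train_one_batch(batch_size)
    elapsed = time.perf_counter() - start
    return iters * batch_size / elapsed  # samples processed per second

print(f"{benchmark():.1f} samples/sec")
```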