Turbocharging the AI Pipeline with Python and Anaconda

Stan Seibert, Director of Community Innovation @ Anaconda

What is AI?
• Augmented AI = Artificial Intelligence?
• Create insight by combining:
  • Data
  • Software
  • Domain expertise

Operational AI Is Hard
"A mature system might end up being (at most) 5% machine learning code and (at least) 95% glue code" – Google AI researchers
Source: "Machine Learning: The High-Interest Credit Card of Technical Debt", Google Inc., 2015

Critical Components
• Open source: build on cutting-edge innovation in the community
• Reproducible environments: manage the data science software environment
• Team collaboration: share and collaborate with fellow data scientists and other stakeholders

Deep Learning: A "Killer App" for GPUs

Examples: Image Captioning
[Figure: automatically generated image captions]
https://research.googleblog.com/2016/09/show-and-tell-image-captioning-open.html

Examples: Time Series Analysis
• Want to model and predict behavior in foreign exchange markets
• GPU-trained RNN using the previous 20 values to predict the next value
http://on-demand.gputechconf.com/gtc/2017/presentation/s7625-daniel-egloff-going-deeper-in-finance.pdf

Examples: Style Transfer
[Figure: photographs re-rendered in the styles of Picasso, van Gogh, and Monet]
http://genekogan.com/works/style-transfer/

The Deep Learning Big Bang
• (Much) bigger training sets
• Faster and specialized hardware
• Open source tools
• Algorithm research

Python: The Language of Deep Learning
• TensorFlow, Keras, PyTorch

How Do GPUs Help?
[Figure: GPU vs. CPU architecture comparison]

The GPU: A Parallel Execution Engine
[Figure: the GPU as a parallel execution engine, ×80]

Critical Components (recap)
• Open source, reproducible environments, and team collaboration

Expanding Beyond Deep Learning

What Else Do Data Scientists Want?
• Data preparation / ETL
• Feature engineering
• Simulation
• Scalable analytics
• Fast visualization
• ...ability to integrate tools together in a complete pipeline

Numba: A JIT Compiler for Python
• An open-source, function-at-a-time compiler library for Python
• Compiler toolbox for different targets and execution models:
  • single-threaded CPU, multi-threaded CPU, GPU
  • regular functions, "universal functions" (array functions), etc.
• Speedup: 2x (compared to basic NumPy code) to 200x (compared to pure Python)
• Combines the ease of writing Python with speeds approaching FORTRAN
• BSD licensed (including the GPU compiler)
• Goal is to empower scientists who make tools for themselves and other scientists
• (A minimal CPU sketch follows after the CUDA Python slide below.)

CUDA Python
The original slide showed a CUDA kernel; only its annotations survived extraction:
• A decorator denotes a CUDA kernel function (launched from the CPU, runs on the GPU)
• Work with NumPy array elements and attributes on the GPU
• A special CUDA function performs atomic addition
• Launch the CUDA kernel with 32 blocks of 32 threads each
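To ground the Numba bullets above, here is a minimal sketch of the CPU JIT. The function name and data are our own illustration, not from the deck:

```python
import numpy as np
from numba import jit

@jit(nopython=True)  # compile to machine code; no Python interpreter in the hot loop
def sum_below(arr, threshold):
    total = 0.0
    for x in arr:        # a plain Python loop, compiled to a tight native loop
        if x < threshold:
            total += x
    return total

data = np.random.rand(1_000_000)
print(sum_below(data, 0.5))  # first call triggers compilation; later calls run at native speed
```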
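The kernel code on the CUDA Python slide was lost in extraction, so the sketch below is a hedged reconstruction that simply matches the four surviving annotations (kernel decorator, NumPy element access on the GPU, atomic addition, a launch with 32 blocks of 32 threads). It requires a CUDA-capable GPU, and the kernel's actual purpose in the talk is unknown:

```python
import numpy as np
from numba import cuda

@cuda.jit                       # denotes a CUDA kernel: launched from the CPU, runs on the GPU
def count_above(arr, threshold, counter):
    i = cuda.grid(1)            # this thread's global index
    if i < arr.size:            # NumPy array attributes work on the GPU
        if arr[i] > threshold:
            cuda.atomic.add(counter, 0, 1)  # special CUDA function for atomic addition

data = np.random.rand(1024).astype(np.float32)
counter = np.zeros(1, dtype=np.int32)
count_above[32, 32](data, 0.5, counter)     # launch with 32 blocks of 32 threads each
print(counter[0])
```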
Numba in a Machine Learning World
• Simulation produces the causes and effects that machine learning learns from
• Concrete examples: cars on a road, pixels in a camera
• Tooling: Numba handles the simulation side, TensorFlow the learning side

GPU Array Interop
• Modifying these projects* to share GPU tensors through a common Python interface
• *PyTorch support in a community PR

Breaking Down GPU Data Silos
• Today: each stage (ETL/data prep, machine learning, database, visualization) holds its own copy of the data on the GPU, with a CPU transfer between every pair of stages
• Better: keep the data on the GPU and pass it directly from stage to stage

Apache Arrow
• Rapidly becoming the standard for many kinds of structured data
• Can Arrow work on the GPU? Answer: yes! (A CPU-side Arrow sketch follows after the Dask example below.)

Dataframes on the GPU
• GPU dataframes are Arrow-format data structures on the GPU
• Designed to be passed between different applications, languages, and runtimes
  • Example: MapD database -> Python notebook -> XGBoost
• RAPIDS dataframe support includes a 3-tier library structure (names to be changed soon):
  • libgdf: C library of GPU dataframe functions
  • pygdf: Python wrapper around GDF + JIT functionality (hedged sketch below)
  • dask_gdf: distributed GPU dataframes

Dask: Distributed Computing Made Easy
• Scales execution of task graphs from single computers to 1000+ node clusters
• Orchestrates CPU and GPU tasks on data structures distributed across many nodes
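As an illustration of what "execution of task graphs" means in practice, here is a minimal single-machine Dask sketch; the array shape and chunking are our own choices:

```python
import dask.array as da

# A lazy, chunked array: each chunk becomes a node in the task graph.
x = da.random.random((10_000, 1_000), chunks=(1_000, 1_000))

# Composing operations only extends the task graph; nothing runs yet.
centered_std = (x - x.mean(axis=0)).std(axis=0)

# compute() hands the graph to a scheduler, which can run it on local
# threads or, via dask.distributed, across a 1000+ node cluster.
print(centered_std.compute())
```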
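Returning to the Apache Arrow slide above, this CPU-side sketch shows the columnar table format that the GPU dataframe work builds on; the column names and values are our own:

```python
import pyarrow as pa

# An Arrow table: a language- and runtime-neutral columnar data structure.
table = pa.Table.from_pydict({'key': [0, 0, 1, 1], 'val': [1.0, 2.0, 3.0, 4.0]})
print(table.schema)    # the schema travels with the data between systems
print(table.num_rows)
```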
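And for the pygdf tier named above, the sketch below shows the intended round trip between pandas and GPU memory. The pygdf API was in flux at the time (the project was later renamed cuDF), so treat the exact calls as assumptions rather than a definitive recipe:

```python
import pandas as pd
import pygdf  # the Python tier named on the slide; later renamed cuDF in RAPIDS

pdf = pd.DataFrame({'key': [0, 0, 1, 1], 'val': [1.0, 2.0, 3.0, 4.0]})

gdf = pygdf.DataFrame.from_pandas(pdf)  # copy the dataframe into GPU memory
gdf['doubled'] = gdf['val'] * 2         # column arithmetic runs on the GPU

print(gdf.to_pandas())                  # copy back to the CPU for display
```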
Reproducible Environments

Moving From Tools to Projects
• Every data science project is a software integration problem
• Need to bring together many tools, plus their dependencies
• Important to be able to record the state of the environment for future reproducibility (a sketch follows at the end of this section)
• Often you want the latest versions of libraries; other times you want older versions
• Want to be able to easily compare different versions

Conda Package Manager
[Diagram: several conda environments side by side, e.g. Numba + NumPy 1.10 on Python 2.7, Pandas 0.16 + NumPy 1.11, cudatoolkit 9.2 on Python 3.6, and R Essentials, all managed by conda]
• Language independent
• Platform independent
• No special privileges required
• No VMs or containers
• Enables: reproducibility, collaboration, scaling

The Deep Learning Technology Stack
• Neural networks: Chainer, Keras
• Tensor math: CuPy, MXNet, TensorFlow, ...and many others
• Primitives: MKL-DNN, cuBLAS, cuDNN, NCCL
• OS/drivers
• Hardware: multi-core CPU, many-core CPU (Xeon Phi), GPU
Everything from the primitives layer up is installable with conda.

GPU-Accelerated Packages in Anaconda
[Figure: GPU-accelerated packages available in the Anaconda Distribution]

Starting a Deep Learning Project

```
conda create -n deeplearn python=3.6 notebook keras tensorflow-gpu
conda activate deeplearn
jupyter notebook
```

Force a particular CUDA toolkit:

```
conda create -n deeplearn python=3.6 notebook keras tensorflow-gpu \
    cudatoolkit=9.0
```

Jupyter Notebooks: DL Narratives
[Screenshot: a deep learning notebook in Jupyter]

Packing Models with Conda
[Diagram: "My App" depends on "My Model" and Tornado; "My Model" depends on TensorFlow, Keras, and Pandas; the whole stack is captured as conda packages]
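To make "record the state of the environment" concrete, here is one common recipe; the environment name matches the example above, and the file name is our own choice:

```
conda env export -n deeplearn > environment.yml   # pin the exact package versions in use
conda env create -f environment.yml               # rebuild the same environment elsewhere
```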
Data Science for Teams

GPU Needs for Data Science Teams
• Centralized access
  • High-end GPUs are most easily managed in the data center
  • Data scientists connect remotely to train and run their GPU-accelerated models
• Resource management
  • Best practice is to allow only one application to use a GPU at a time
  • Need to reserve a GPU exclusively for a user while they are running their GPU code
• Support for varying compute needs
  • Many projects don't need a GPU, some need one, and a very small number need two or more
  • The most cost-efficient approach is a heterogeneous cluster with a mixture of GPU and non-GPU nodes

Anaconda Enterprise: An AI Enablement Platform for Teams at Scale
• From one data scientist to thousands
• From one machine to thousands

Anaconda Enterprise: Cloud Native AI
• Anaconda is the industry's trusted distributor for all core AI technologies
• Simplifies and automates AI governance and reproducibility
• Cloud native: modern, dynamic, API-oriented, and container-based

Heterogeneous Clusters
[Diagram: an Anaconda Enterprise deployment on Kubernetes. An AE master node runs the AE services (UI, editor sessions, Git, database, object storage, persistent disk) alongside Kubernetes masters (kube-apiserver, etcd); CPU and GPU worker nodes (docker, kubelet) host user sessions and deployments, with TensorFlow and PyTorch sessions scheduled onto the GPU worker]

• Projects that need GPU resources can request them
[Screenshot: requesting GPU resources for a project]
• GPU device available to TensorFlow running in a notebook
[Screenshot: TensorFlow listing the GPU device from a notebook]

Varying Hardware Needs
• Inference in batches, or one at a time? Do you need GPU levels of throughput in production?
• The GPU is only 30% faster with batch size = 1, but 10x faster when batch size >= 32

Conclusion
• GPU-accelerated AI with Anaconda brings together many parts:
  • Open source technology: deep learning, GPU dataframes, and Numba for custom algorithms and simulation
  • Reproducible environments: the conda package manager and Anaconda Distribution
  • Team collaboration: resource management and deployment with Anaconda Enterprise
• Learn more at: https://www.anaconda.com/

Questions?
Recommended publications
  • Python on GPUs (Work in Progress!)
  • Comparison Study for Implementation Efficiency of CUDA GPU Parallel Computation with the Fast Iterative Shrinkage-Thresholding Algorithm
  • Prototyping and Developing GPU-Accelerated Solutions with Python and CUDA (Luciano Martins and Robert Sohigian, 2018-11-22)
  • Tangent: Automatic Differentiation Using Source-Code Transformation for Dynamically Typed Array Programming
  • Python Guide Documentation 0.0.1
  • HyperLearn Documentation, Release 1
  • ZMCintegral: A Package for Multi-Dimensional Monte Carlo Integration on Multi-GPUs
  • Python Guide Documentation 0.0.1 (Spanish Edition)
  • Aayush Mandhyan
  • TensorFlow, PyTorch and Horovod
  • Numba: A Compiler for Python Functions
  • Speeding up Python: Or How to Avoid Recoding Your (Entire) Application in Another Language