What's Inside Intel® Parallel Studio XE


Accelerate Parallel Code, Transform Enterprise to Cloud & HPC to AI Applications

Klaus-Dieter Oertel, Intel CVCG Developer Products Division
CERN, 14 Nov 2018


What's Inside Intel® Parallel Studio XE: Comprehensive Software Development Tool Suite

The suite ships in three nested editions: the Composer Edition (build tools), the Professional Edition (which adds analysis tools), and the Cluster Edition (which adds cluster tools).

BUILD: Compilers & Libraries
- Intel® C/C++ and Fortran Compilers
- Intel® Math Kernel Library
- Intel® Data Analytics Acceleration Library
- Intel® Threading Building Blocks: C++ threading
- Intel® Integrated Performance Primitives: image, signal & data processing
- Intel® Distribution for Python*: high-performance Python

ANALYZE: Analysis Tools
- Intel® VTune™ Amplifier: performance profiler
- Intel® Inspector: memory & thread debugger
- Intel® Advisor: vectorization optimization, thread prototyping & flow graph analysis

SCALE: Cluster Tools
- Intel® MPI Library: Message Passing Interface library
- Intel® Trace Analyzer & Collector: MPI tuning & analysis
- Intel® Cluster Checker: cluster diagnostic expert system

Operating systems: Windows*, Linux*, macOS* (macOS is supported only in the Composer Edition). Runs on Intel® Architecture platforms.


What's New in the 2019 Version

Intel® Parallel Studio XE: Accelerate Parallel Code; Transform Cloud, HPC & AI

- Improve application performance on Intel® Xeon® Scalable and Intel® Core™ processors with new enhancements in compilers, performance libraries, and analysis tools:
  – Vectorize and thread your code (using OpenMP*) to take advantage of the latest SIMD-enabled hardware, including Intel® Advanced Vector Extensions 512 (Intel® AVX-512)
  – Accelerate diverse workloads for enterprise, cloud, HPC, and AI
- Extend HPC solutions on the path to exascale: gain greater scalability and reduce latency with the next-generation Intel® MPI Library (a short mpi4py sketch follows this overview)
- Use the new, more accessible user interface in Intel® VTune™ Amplifier for a simplified profiling workflow with familiar terminology and logical groupings, and preview a new platform profiler for longer, higher-level performance analysis
- Visualize parallelism with a rapid visual prototyping environment: interactively build, validate, and visualize parallel algorithms with Intel® Advisor's Flow Graph Analyzer
- Speed machine learning by enabling new high-performance Python* capabilities
- Supports industry standards and IDEs


Optimize Efficiently with Valuable Resources

Shortcut your optimization work: sign up now and attend the TEC webinars at https://intel.ly/2PdkNhN

Intel® Parallel Studio XE
- Overview, features, support, and code samples
- Training materials, Tech.Decoded webinars, how-to videos & articles
- Reviews & case studies
- More Intel® Software Development Products

Intel Code Modernization Program
- Overview
- Live training
- Remote access
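The MPI scalability mentioned above is also exposed to Python through the optimized mpi4py package that ships with the Intel® Distribution for Python* (covered later in this deck). The snippet below is a minimal sketch, assuming mpi4py and an MPI runtime such as the Intel® MPI Library are installed; the script name in the comment is only illustrative.

    # Launch with, e.g.: mpirun -n 4 python allreduce_demo.py
    # (allreduce_demo.py is a hypothetical file name)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes a local vector; Allreduce sums the vectors
    # element-wise across all ranks using the underlying MPI library's
    # optimized collective, the kind of latency-sensitive operation the
    # MPI improvements above target.
    local = np.full(4, rank, dtype=np.float64)
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)

    if rank == 0:
        print("reduced vector:", total)  # each slot holds the sum of ranks 0..n-1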
Build, Analyze, Scale: What Each Edition Contains

- Included in the Composer Edition (BUILD): Intel® C++ Compiler, Intel® Fortran Compiler, Intel® Distribution for Python*, Intel® Math Kernel Library, Intel® Integrated Performance Primitives, Intel® Threading Building Blocks, Intel® Data Analytics Acceleration Library
- Part of the Professional Edition (ANALYZE): Intel® VTune™ Amplifier, Intel® Advisor, Intel® Inspector
- Part of the Cluster Edition (SCALE): Intel® MPI Library, Intel® Trace Analyzer & Collector, Intel® Cluster Checker


Fast, Scalable, Parallel Code with Intel® Compilers

Deliver industry-leading C/C++ & Fortran code performance and unleash the power of the latest Intel® processors.

- Develop optimized and vectorized code for Intel® architectures, including Intel® Xeon® processors
- Leverage language and OpenMP* standards, and maintain compatibility with leading compilers & IDEs

Learn more: software.intel.com/intel-compilers


What's New in Intel® Compilers 2019 (19.0)

Updates to all versions:
- Advanced support for Intel® architecture: use Intel® Compilers to generate optimized code for everything from the Intel Atom® processor through Intel® Xeon® Scalable processors
- Superior parallel performance: vectorize & thread your code (using OpenMP*) to take advantage of the latest SIMD-enabled hardware, including Intel® Advanced Vector Extensions 512 (Intel® AVX-512)

What's new in C++:
- Additional C++17 standard feature support, including improvements to lambda and constant-expression support
- Improved GNU C++ & Microsoft C++ compiler compatibility
- Standards-driven parallelization for C++ developers: complete OpenMP 4.5 support, including user-defined reductions, plus partial OpenMP* 5.0 support (OpenMP 5.0 was still a draft at release time)
- Modernize your code by using the latest parallelization specifications

What's new in Fortran:
- Substantial Fortran 2018 support, including the coarray features EVENTS & COSHAPE, IMPORT statement enhancements, and default module accessibility
- A "check shape" option for runtime array-conformance checking


Accelerate Python* with Intel® Distribution for Python*

High-performance Python* for scientific computing, data analytics, and machine & deep learning.

Faster performance (performance libraries, parallelism, multithreading, language extensions):
- Accelerated NumPy/SciPy/scikit-learn optimizations with the Intel® Math Kernel Library and the Intel® Data Analytics Acceleration Library
- Data analytics, machine & deep learning with scikit-learn, pyDAAL, TensorFlow* & Caffe*
- Scale with Numba* & Cython*
- Includes optimized mpi4py; works with Dask* & PySpark*
- Optimized for the latest Intel® architecture

Greater productivity (prebuilt & accelerated packages):
- Prebuilt, optimized packages for numerical computing, machine/deep learning, HPC & data analytics
- Drop-in replacement for existing Python: no code changes required (see the NumPy sketch below)
- Jupyter* notebooks and Matplotlib included
- Free download and free for all uses, including commercial deployment

Ecosystem compatibility:
- Supports Python 2.7 & 3.6
- Distribution and optimized packages available via conda, pip, APT, YUM & Docker Hub; numerical performance optimizations integrated into the Anaconda* Distribution
- Optimizations upstreamed to the main Python trunk
- Priority Support with Intel® Parallel Studio XE

Operating systems: Windows*, Linux*, macOS* (macOS only in the Composer Edition). Runs on Intel® Architecture platforms.

Learn more: software.intel.com/distribution-for-python
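Because the accelerated packages keep the standard NumPy/SciPy APIs, the drop-in-replacement claim above means existing code dispatches to the MKL-backed kernels without modification. A minimal sketch, assuming the Intel® Distribution for Python* is the active environment (array sizes chosen only for illustration):

    import numpy as np

    # Ordinary, unmodified NumPy code. Under the Intel Distribution for
    # Python, the matrix product and FFT below run on multithreaded
    # Intel MKL kernels instead of the stock implementations.
    a = np.random.rand(2000, 2000)
    b = np.random.rand(2000, 2000)

    c = a @ b                  # dense matrix multiply (GEMM)
    spectrum = np.fft.fft(a)   # FFT along the last axis

    # show_config() reports which BLAS/LAPACK backend this NumPy build
    # uses, so you can confirm MKL is in play.
    np.show_config()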
Faster Python* with Intel® Distribution for Python*: Performance Close to Native Code

- Accelerated NumPy, SciPy, and scikit-learn for scientific computing, machine learning & data analytics
- Drop-in replacement for existing Python: no code changes required
- Highly optimized for the latest Intel® processors

What's new in the 2019 release:
- Faster machine learning with scikit-learn: Support Vector Machine (SVM) & K-means prediction, accelerated with the Intel® Data Analytics Acceleration Library (a K-means example follows the configuration notes below)
- Includes the XGBoost machine learning library (Linux* only)
- Also available as an easy standalone command-line install

[Chart: scikit-learn performance efficiency of Intel® Distribution for Python* 2019 versus stock Python on Intel® Xeon® processors, measured against native Intel® DAAL (binary) code, for cosine distance, correlation distance, kmeans.fit/predict, linear_reg.fit/predict, ridge_reg.fit/predict, and svm.fit/predict workloads at input sizes from 10K x 1K to 1M x 50.]

Performance results are based on testing as of July 9, 2018 and may not reflect all publicly available security updates. See the configuration disclosure for details. No product can be absolutely secure. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information, see the Performance Benchmark Test Disclosure.

Testing by Intel as of July 9, 2018. Configuration: Stock Python: python 3.6.6 hc3d631a_0 installed from conda; NumPy 1.15, numba 0.39.0, llvmlite 0.24.0, scipy 1.1.0, scikit-learn 0.19.2 installed from pip. Intel Python: Intel® Distribution for Python* 2019 Gold: python 3.6.5 intel_11, NumPy 1.14.3 intel_py36_5, mkl 2019.0 intel_101, mkl_fft 1.0.2 intel_np114py36_6, mkl_random 1.0.1 intel_np114py36_6, numba 0.39.0 intel_np114py36_0, llvmlite 0.24.0 intel_py36_0, scipy 1.1.0 intel_np114py36_6, scikit-learn 0.19.1 intel_np114py36_35. OS: CentOS Linux 7.3.1611, kernel 3.10.0-514.el7.x86_64. Hardware:
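For reference, the kmeans.fit and kmeans.predict entries in the chart above correspond to ordinary scikit-learn calls like the following. This is a minimal sketch using unchanged, stock scikit-learn code; under the Intel® Distribution for Python* the same calls run on kernels accelerated with the Intel® Data Analytics Acceleration Library (the data shape here is illustrative, smaller than the benchmark's 1M x 50 input):

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for the benchmark's tall-and-narrow input.
    X = np.random.rand(100_000, 50)

    # Same scikit-learn API as stock Python; with the Intel Distribution
    # for Python the fit and predict paths are DAAL-accelerated.
    km = KMeans(n_clusters=8, random_state=0).fit(X)
    labels = km.predict(X)

    print("inertia:", km.inertia_)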