Intel Tuning for JURECA and JUWELS

Dr. Heinrich Bockhorst, Intel
Date: 29.11.2019

Agenda
• Introduction
• Processor Architecture Overview
• Composer XE – Compiler
• Intel Python
• APS – Application Performance Snapshot
• MPI and ITAC analysis
• VTune Amplifier XE – analysis
• Advisor XE – vectorization
• Selected Intel® Tools
• References

Processor Architecture Overview

The "Free Lunch" is over, really
• Processor clock rate growth halted around 2005. (Figure omitted; source: © 2014, James Reinders, Intel, used with permission.)
• Software must be parallelized to realize all the potential performance (a short OpenMP sketch appears at the end of this overview).

What platform should I use for code modernization?
The world is going parallel – stick with sequential code and you will fall behind.

  Product family                          Codename          Cores   Threads/core   Vector width    Peak memory bandwidth
  Intel® Xeon® E5-2600 v3                 Haswell           18      2              256-bit         68 GB/s
  Intel® Xeon Phi™ x100                   Knights Corner    61      4              512-bit         352 GB/s
  Intel® Xeon® E5-2600 v4                 Broadwell         22      2              256-bit         77 GB/s
  Intel® Xeon Phi™ x200                   Knights Landing   72      4              512-bit (x2)    >500 GB/s
  Intel® Xeon® Scalable (JUWELS today)    Skylake           28      2              512-bit (x2)    228 GB/s

Both Xeon and KNL are suitable platforms; KNL provides higher scale and memory bandwidth.

Intel® Parallel Studio XE
• Profiling, analysis, and cluster tools:
  ▪ Intel® Advisor – vectorization optimization and thread prototyping
  ▪ Intel® Inspector – memory and threading checking
  ▪ Intel® VTune™ Amplifier – performance profiler
  ▪ Intel® Trace Analyzer and Collector – MPI profiler
  ▪ Intel® Cluster Checker – cluster diagnostic expert system
• Performance libraries:
  ▪ Intel® MPI Library
  ▪ Intel® Math Kernel Library – optimized routines for science, engineering, and financial applications
  ▪ Intel® Data Analytics Acceleration Library – optimized for data analytics and machine learning
  ▪ Intel® Integrated Performance Primitives – image, signal, and compression routines
  ▪ Intel® Threading Building Blocks – task-based parallel C++ template library
• Compilers and scripting:
  ▪ Intel® C/C++ and Fortran Compilers
  ▪ Intel® Distribution for Python – performance scripting
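As a minimal sketch of the "software must be parallelized" point above, the following C example parallelizes a simple SAXPY-style loop with OpenMP. The file name, function layout, and problem size are illustrative assumptions, not taken from the slides; the -qopenmp switch is the one listed in the compiler-options table in the next section.

    /* saxpy_omp.c - minimal OpenMP parallelization sketch (illustrative only).
     * Possible build line with the Intel compiler:
     *   icc -O2 -qopenmp saxpy_omp.c
     */
    #include <omp.h>
    #include <stdio.h>

    #define N 10000000

    static float x[N], y[N];

    int main(void)
    {
        const float a = 2.0f;

        for (int i = 0; i < N; ++i) {
            x[i] = 1.0f;
            y[i] = 2.0f;
        }

        /* Iterations are split across all available cores/threads;
         * a purely serial loop would use only one core. */
        #pragma omp parallel for
        for (int i = 0; i < N; ++i)
            y[i] = a * x[i] + y[i];

        /* Expect y[0] = 2*1 + 2 = 4 regardless of the thread count. */
        printf("y[0] = %.1f, max threads = %d\n", y[0], omp_get_max_threads());
        return 0;
    }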
Intel® Composer XE – Common Optimization Options

                                                     Windows*                 Linux*, OS X*
  Disable optimization                               /Od                      -O0
  Optimize for speed (no code size increase)         /O1                      -O1
  Optimize for speed (default)                       /O2                      -O2
  High-level loop optimization                       /O3                      -O3
  Create symbols for debugging                       /Zi                      -g
  Multi-file inter-procedural optimization           /Qipo                    -ipo
  Profile-guided optimization (multi-step build)     /Qprof-gen, /Qprof-use   -prof-gen, -prof-use
  Optimize for speed across the entire program       /fast                    -fast
  ("prototype switch")
  OpenMP support                                     /Qopenmp                 -qopenmp
  Automatic parallelization                          /Qparallel               -parallel

  Note: the definition of the fast options changes over time. Currently:
  Windows: /fast is the same as /O3 /Qipo /Qprec-div- /fp:fast=2 /QxHost
  Linux:   -fast is the same as -ipo -O3 -no-prec-div -static -fp-model fast=2 -xHost
  OS X:    -fast is the same as -ipo -mdynamic-no-pic -O3 -no-prec-div -fp-model fast=2 -xHost

Vectorization
• Single Instruction Multiple Data (SIMD):
  ▪ Processes a whole vector with a single operation
  ▪ Provides data-level parallelism (DLP)
  ▪ Because of DLP, more efficient than scalar processing
• Vector:
  ▪ Consists of more than one element
  ▪ Elements are of the same scalar data type (e.g. floats, integers, …)
  ▪ Vector length (VL): number of elements in the vector
(Figure omitted: scalar processing computes one Ai + Bi = Ci per operation; vector processing computes VL such sums at once.)

Many Ways to Vectorize (from greatest ease of use to greatest programmer control; see the C sketch at the end of this section)
• Compiler: auto-vectorization (no change of code)
• Compiler: auto-vectorization hints (#pragma vector, …)
• Compiler: Intel® Cilk™ Plus array notation extensions
• SIMD intrinsic classes (e.g. F32vec, F64vec, …)
• Vector intrinsics (e.g. _mm_fmadd_pd(…), _mm_add_ps(…), …)
• Assembler code (e.g. [v]addps, [v]addss, …)

Basic Vectorization Switches I
• Linux*, OS X*: -x<feature>
  ▪ May enable Intel processor-specific optimizations
  ▪ A processor check is added to the "main" routine: the application exits with an appropriate/informative error message if the SIMD feature is missing or the processor is not an Intel processor
  ▪ Example: -xCORE-AVX2 (JURECA Xeon, Haswell)
  ▪ Example: -xMIC-AVX512 (JURECA Booster, KNL)
  ▪ Example: -xCORE-AVX512 (JUWELS Xeon, Skylake)
• Linux*, OS X*: -ax<features>
  ▪ Multiple code paths: baseline and optimized/processor-specific
  ▪ Multiple SIMD features/paths possible, e.g. -axSSE2,CORE-AVX512
  ▪ The baseline code path defaults to -xSSE2
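To make the "Many Ways to Vectorize" list and the -x switches above concrete, here is a small sketch that contrasts a plain auto-vectorizable loop with explicit 128-bit SSE intrinsics (_mm_add_ps, one of the intrinsics named above). The function names, array size, and build lines are illustrative assumptions only.

    /* vec_demo.c - auto-vectorization vs. explicit SSE intrinsics (sketch).
     * Possible build lines with the flags discussed above:
     *   icc -O2 -xCORE-AVX2   vec_demo.c   (JURECA Haswell nodes)
     *   icc -O2 -xCORE-AVX512 vec_demo.c   (JUWELS Skylake nodes)
     */
    #include <immintrin.h>
    #include <stdio.h>

    #define N 1024

    /* Plain C loop: the compiler is free to auto-vectorize this. */
    void add_auto(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

    /* Explicit 128-bit SSE intrinsics: four floats per _mm_add_ps. */
    void add_intrinsics(const float *a, const float *b, float *c, int n)
    {
        int i;
        for (i = 0; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
        }
        for (; i < n; ++i)              /* scalar remainder loop */
            c[i] = a[i] + b[i];
    }

    int main(void)
    {
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }
        add_auto(a, b, c, N);
        add_intrinsics(a, b, c, N);
        printf("c[10] = %.1f\n", c[10]);   /* expect 30.0 */
        return 0;
    }

The intrinsic version trades portability and readability for explicit control; in most cases the auto-vectorized loop compiled with the appropriate -x<feature> switch is the easier starting point.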
Basic Vectorization Switches II
• Special switch for Linux*, OS X*: -xHost
  ▪ The compiler checks the SIMD features of the host processor it is building on and uses the latest SIMD feature available
  ▪ The code only executes on processors with the same SIMD feature as the build host, or a later one
  ▪ As with -x<feature>, if the "main" routine is built with -xHost the final executable only runs on Intel processors

Basic Vectorization Switches III
• Special switch in addition to CORE-AVX512: -qopt-zmm-usage=[keyword]
  ▪ [keyword] = [high | low]; note: "low" is the default
  ▪ Why choose a defensive vectorization level? The clock frequency drops in vectorized parts and does not recover immediately after a vectorized loop, so too many small vectorized loops decrease the performance of the serial part.

Control Vectorization
• Verify vectorization:
  ▪ Globally: Linux*, OS X*: -qopt-report[=n]; check for additional options (man icc)!
  ▪ Annotated source listing: -qopt-report-annotate=[text | html] generates a source listing with compiler comments
• Advanced (see the C sketch at the end of this section):
  ▪ Ignore vector dependencies (IVDEP): C/C++: #pragma ivdep; Fortran: !DIR$ IVDEP
  ▪ "Enforce" vectorization: C/C++: #pragma omp simd; Fortran: !$omp simd

NextGen Compiler
• The compiler will be based on LLVM in the future; this can be enabled in the latest Intel compiler with -qnextgen.
• More information: https://software.intel.com/en-us/articles/early-documentation-for-intel-c-compiler-based-on-the-modern-llvm-framework
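Below is a minimal sketch of the two vectorization-control pragmas from the "Control Vectorization" part above, together with an optimization-report build line. The function names, the shift parameter k, and the file name are illustrative assumptions; the pragmas and compiler switches themselves are the ones named in the slides.

    /* control_demo.c - IVDEP and OpenMP SIMD sketch (illustrative only).
     * Possible build line:
     *   icc -O2 -xHost -qopenmp -qopt-report=2 control_demo.c
     * The generated optimization report states whether each loop was vectorized.
     */

    /* The compiler must assume a loop-carried flow dependence here: if k
     * were negative, a[i + k] would have been written by an earlier
     * iteration. IVDEP tells it to ignore this assumed dependence, i.e.
     * the caller guarantees k >= 0. */
    void shift_add(double *a, int n, int k)
    {
    #pragma ivdep
        for (int i = 0; i < n - k; ++i)
            a[i] = a[i + k] + 1.0;
    }

    /* OpenMP SIMD "enforces" vectorization: the programmer takes
     * responsibility for the absence of unsafe dependences. */
    void scale(double *a, const double *b, int n)
    {
    #pragma omp simd
        for (int i = 0; i < n; ++i)
            a[i] = 2.0 * b[i];
    }

    int main(void)
    {
        double a[64], b[64];
        for (int i = 0; i < 64; ++i) { a[i] = i; b[i] = i; }
        shift_add(a, 64, 1);
        scale(a, b, 64);
        return (a[0] == 0.0) ? 0 : 1;   /* a[0] = 2.0 * b[0] = 0.0 */
    }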