Scalability-Driven Approaches to Key Aspects of the Message Passing Interface for Next Generation Supercomputing


Scalability-Driven Approaches to Key Aspects of the Message Passing Interface for Next Generation Supercomputing

by Ayi Judicael Zounmevo

A thesis submitted to the Department of Electrical and Computer Engineering in conformity with the requirements for the degree of Doctor of Philosophy

Queen's University
Kingston, Ontario, Canada
May 2014

Copyright © Ayi Judicael Zounmevo, 2014

Abstract

The Message Passing Interface (MPI), which dominates the supercomputing programming environment, is used to orchestrate and fulfill communication in High Performance Computing (HPC). How far HPC programs can scale depends in large part on the ability to achieve fast communication and to overlap communication with computation, or communication with other communication.

This dissertation proposes a new asynchronous solution to the nonblocking Rendezvous protocol used between pairs of processes to transfer large payloads. On top of enforcing communication/computation overlapping in a comprehensive way, the proposal trumps existing network device-agnostic asynchronous solutions by being memory-scalable and by avoiding brute-force strategies.

Achieving overlap between communication and computation is important, but each communication is also expected to incur minimal latency. In that respect, the processing of the queues meant to hold messages pending reception inside the MPI middleware is expected to be fast. Currently, though, that processing slows down as program scale grows. This research presents a novel scalability-driven message queue whose processing skips altogether large portions of queue items that are deterministically guaranteed to lead to unfruitful searches. Because it has little sensitivity to program size, the proposed message queue maintains very good performance, on top of displaying a low and flattening memory-footprint growth pattern.

Due to the blocking nature of its required synchronizations, the one-sided communication model of MPI creates both communication/computation and communication/communication serializations. This research fixes these issues, along with documented latency-related inefficiencies of MPI one-sided communications, by proposing completely nonblocking and non-serializing versions of those synchronizations. The improvements, meant for consideration in a future MPI standard, also allow new classes of programs to be expressed more efficiently in MPI.

Finally, a persistent distributed service is designed over MPI to show its impact at large scales beyond communication-only activities. MPI is analyzed in situations of resource exhaustion, partial failure and heavy use of internal objects for communicating and non-communicating routines. Important scalability issues are revealed and solution approaches are put forth.

Acknowledgments

Thanks to God for past and current graces, for the wisdom he offers for free, and for watching over me and my future.

I am deeply indebted to my supervisor, Dr. Ahmad Afsahi, for his unfailing support and for constantly pushing me toward the completion of this research. My thanks go to him for his guidance toward high-quality research, his understanding and his praiseworthy patience. I am also indebted to Dr. Dries Kimpe, Dr. Pavan Balaji and Dr. Rob Ross of the U.S. Argonne National Laboratory, IL. I undeniably grew with their advice and insight during my Ph.D. studies. To the members of my thesis committee, Dr. Ying Zou, Dr.
Ahmed Hassan, Dr. Michael Korenberg and Dr. Jesper Larsson Träff, thanks for the priceless feedback on this research.

To my co-workers from the Parallel Processing Research lab, Dr. Ying Qian, Dr. Mohammad Rashti, Dr. Ryan Grant, Dr. Reza Zamani, Grigory Inozemtsev, Iman Faraji and Hessam Mirsadeghi, thanks for your support, the constructive discussions, the jokes and the laughter. The experience would have been lonely without you. To Ryan, additional thanks for that spontaneous and extremely helpful advice in my last year. To Iman and Hessam, thanks for now being two irreplaceable friends for life. Thanks to Xin Zhao of the University of Illinois at Urbana-Champaign for the collaboration and insightful discussions on MPI one-sided communications.

Thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC) for supporting this research through grants to my supervisor. Thanks to the Electrical and Computer Engineering (ECE) Department of Queen's University as well as the School of Graduate Studies for the financial support. Thanks to the IT service and office staff of the ECE Department of Queen's University. Special thanks to Ms. Debra Fraser and Ms. Kendra Pople-Easton for being so nice, so supportive and so available for me. My gratitude also goes to Pak Lui and the HPC Advisory Council for the supercomputing resources. Thanks to the Argonne Laboratory Computing Research Center for the supercomputing resources.

To all my friends in Montreal, I cannot acknowledge enough your support. In alphabetical order, Narcisse Adjalla, Santos Adjalla, Sami Abike Yacoubou-Chabi Yo, Sani Chabi Yo, Florent Kpodjedo, Ablan Joyce Nouaho and Fatou Tao, thank you. As for Florent, my improvised psychologist in difficult and lonely moments, additional thanks; you actually watched over me.

Lastly, I would like to acknowledge my family for always believing in me. My gratitude goes to my brothers and sisters, Ida, Cyr, Hervé, Eric, Lydia, Annick and Laure for their support and prayers. To my father Fabien and my mother Jeanne, you are the absolute mental strength that I carry with myself; I owe you so much.

Table of Contents

Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
List of Equations
List of Code Listings
Acronyms

Chapter 1: Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Contributions
  1.4 Dissertation Outline

Chapter 2: Background
  2.1 Message Passing Interface (MPI)
    2.1.1 Datatypes and Derived Datatypes in MPI
    2.1.2 Two-sided Communications
    2.1.3 Collective Communications
    2.1.4 One-sided Communications
    2.1.5 The Nonblocking Communication Handling in MPI
    2.1.6 Generalized Requests and Info Objects
    2.1.7 Thread Levels
  2.2 High Performance Interconnects
    2.2.1 InfiniBand: Characteristics, Queueing Model and Semantics
    2.2.2 The Lower-level Messaging Middleware
  2.3 Summary

Chapter 3: Scalable MPI Two-Sided Message Progression over RDMA
  3.1 Introduction to Rendezvous Protocols
    3.1.1 RDMA Write-based Rendezvous Protocol
    3.1.2 RDMA Read-based Rendezvous Protocol
    3.1.3 Synthesis of the RDMA-based Protocols and Existing Improvement Proposals
    3.1.4 Hardware-offloaded Two-sided Message Progression
    3.1.5 Host-based Asynchronous Message Progression Methods
    3.1.6 Summary of Related Work
  3.2 Scenario-conscious Asynchronous Rendezvous
    3.2.1 Design Objectives
    3.2.2 Asynchronous Execution Flows inside a Single Thread
  3.3 Experimental Evaluation
    3.3.1 Microbenchmark Results
    3.3.2 Application Results
  3.4 Summary

Chapter 4: Scalable Message Queues in MPI
  4.1 Related Work
  4.2 Motivations
    4.2.1 The Omnipresence of Message Queue Processing in MPI Communications
    4.2.2 Performance and Scalability Concerns
    4.2.3 Rethinking Message Queues for Scalability by Leveraging MPI's Very Characteristics
  4.3 A New Message Queue for Large Scales
    4.3.1 Reasoning about Dimensionality
    4.3.2 A Scalable Multidimensional MPI Message Sub-queue
    4.3.3 Receive Match Ordering Enforcement
    4.3.4 Size Threshold Heuristic for Sub-queue Structures
  4.4 Performance Analysis
    4.4.1 Runtime Complexity Analysis
    4.4.2 Memory Overhead Analysis
    4.4.3 Cache Behaviour
  4.5 Practical Justification of a New Message Queue Data Structure
  4.6 Experimental Evaluation
    4.6.1 Notes on Measurements
    4.6.2 Microbenchmark Results
    4.6.3 Application Results
    4.6.4 Message Queue Memory Behaviours at Extreme Scales
  4.7 Summary

Chapter 5: Nonblocking MPI One-sided Communication Synchronizations
  5.1 Related Work
  5.2 The Burden of Blocking RMA Synchronization
    5.2.1 The Inefficiency Patterns
    5.2.2 Serializations in MPI One-sided Communications
  5.3 Opportunistic Message Progression as Default Advantage for Nonblocking Synchronizations
  5.4 Impacts of Nonblocking RMA Epochs
    5.4.1 Fixing The Inefficiency Patterns
    5.4.2 Unlocking or Improving New Use Cases
  5.5 Nonblocking API
  5.6 Semantics, Correctness and Progress Engine Behaviours
    5.6.1 Semantics and Correctness
    5.6.2 Info Object Key-value Pairs and Aggressive Progression Rules
  5.7 Design and Realization Notes
    5.7.1 Separate and New Progress Engine
    5.7.2 Deferred Epochs and Epoch Recording
    5.7.3 Epoch Ids, Win traits and Epoch Matching
    5.7.4 Notes on Lock and Fence Epochs
  5.8 Experimental Evaluation
    5.8.1 Microbenchmark
    5.8.2 Unstructured Dynamic Application Pattern
    5.8.3 Application Test
  5.9 Summary

Chapter 6: MPI in Extreme-scale Persistent Services: Some Missing Scalability Features
  6.1 Related Work
  6.2 Service Overview: The Distributed Storage
  6.3 Transport Requirements
  6.4 MPI as a Network API
    6.4.1 Connection/Disconnection
    6.4.2 The RPC Listener
    6.4.3 I/O Life Cycles
    6.4.4 Cancellation
    6.4.5 Client-server and Failure Behaviour
    6.4.6 Object Limits
  6.5 Some Missing Scalability Features
    6.5.1 Fault Tolerance
    6.5.2 Generalization of Nonblocking Operations
    6.5.3 Object Limits and Resource Exhaustion
  6.6 Summary
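
As a concrete illustration of the two issues described in the abstract, the short C sketch below (not drawn from the thesis; the buffer size, the compute routine and the rank roles are illustrative assumptions) contrasts them using only standard MPI calls: a nonblocking two-sided exchange of a payload large enough to take the Rendezvous path in many MPI implementations, where computation is intended to overlap with the transfer, and a one-sided access epoch delimited by the blocking, collective MPI_Win_fence synchronization, whose nonblocking counterparts are what Chapter 5 proposes.

/* Minimal sketch, assuming mpicc and an MPI-2 or later library. */
#include <mpi.h>
#include <stdlib.h>

#define N (1 << 20)   /* payload large enough for the Rendezvous path in many MPI libraries */

static void compute(double *work, int n)
{
    for (int i = 0; i < n; i++)        /* stand-in for application computation */
        work[i] = work[i] * 0.5 + 1.0;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Request req = MPI_REQUEST_NULL;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double *buf  = calloc(N, sizeof(double));
    double *work = calloc(N, sizeof(double));

    /* Two-sided case: the nonblocking send/receive lets the computation run
     * while the large transfer is (ideally) progressed in the background. */
    if (size > 1 && rank == 0)
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);
    compute(work, N);                  /* intended to overlap with the transfer */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    /* One-sided case: the fence synchronizations that open and close the
     * access epoch are blocking and collective over the window's group. */
    MPI_Win_create(buf, (MPI_Aint)(N * sizeof(double)), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);             /* blocking: opens the epoch */
    if (size > 1 && rank == 0)
        MPI_Put(work, N, MPI_DOUBLE, 1, 0, N, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);             /* blocking: closes the epoch */
    MPI_Win_free(&win);

    free(buf);
    free(work);
    MPI_Finalize();
    return 0;
}

In the two-sided case, how much of the computation actually overlaps with the transfer depends on how the library progresses the Rendezvous protocol, which is the problem Chapter 3 addresses; in the one-sided case, both fences block until every process in the window's group reaches the epoch boundary, which is the serialization Chapter 5 removes.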