Mapping of Large Task Network on Manycore Architecture
Karl-Eduard Berger

Total Pages: 16

File Type: pdf, Size: 1020 KB

To cite this version: Karl-Eduard Berger. Mapping of large task network on manycore architecture. Distributed, Parallel, and Cluster Computing [cs.DC]. Université Paris Saclay (COmUE), 2015. English. NNT: 2015SACLV026. tel-01318824.
HAL Id: tel-01318824, https://tel.archives-ouvertes.fr/tel-01318824. Submitted on 20 May 2016.
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

NNT: 2015SACLV026. Doctoral thesis of Université Paris-Saclay, prepared at l'Université Versailles Saint-Quentin-en-Yvelines. Doctoral school no. 580, Sciences et technologies de l'information et de la communication, specialty Mathématiques et Informatique. By M. Karl-Eduard Berger: "Placement de graphes de tâches de grande taille sur architectures massivement multicoeurs". Thesis presented and defended at CEA Saclay Nano-INNOV on 8 December 2015.

Jury:
• Didier El Baz, Chargé de recherche CNRS, LAAS (reviewer)
• François Galea, Researcher, CEA (CEA advisor)
• Bertrand Le Cun, Maître de conférences, PRISM (thesis co-supervisor)
• Alix Munier, Professor, UPMC (examiner)
• Dritan Nace, Professor, HEUDIASYC (committee president)
• François Pellegrini, Professor, LABRI (reviewer)
• Renaud Sirdey, Research Director, CEA (thesis supervisor)

French title: Placement de graphes de tâches de grande taille sur architectures massivement multicoeurs. French keywords: placement de tâches, recherche opérationnelle, architectures manycoeurs.

Résumé (translated from the French): This Ph.D. thesis is devoted to the study of a task mapping problem arising in the compilation of applications for massively parallel architectures. This mapping problem has to be solved with respect to three criteria: the algorithms must be able to handle applications of various sizes, they must satisfy the capacity constraints of the processors, and they must take the topology of the target architectures into account. In this thesis, tasks are organized in communication networks, modeled as graphs. To evaluate the quality of the solutions produced by the algorithms, the resulting mappings are compared with a random mapping; this comparison serves as the evaluation metric for the mappings produced by the different proposed methods. To solve the problem, three mapping algorithms were developed. Task-wise Placement and Subgraph-wise Placement apply to cases where the weights of tasks and edges are unitary. The Regret-based Approach is a progressive construction heuristic based on game theory that applies to graphs whose task and edge weights vary, similarly to the values found in industrial cases. To verify the robustness of the algorithm, different types of task graphs of various sizes were generated.
Title: Mapping of large task networks on manycore architecture. Keywords: task mapping, operational research, manycore architecture.

Abstract: This Ph.D. thesis is devoted to the study of the mapping problem related to massively parallel embedded architectures. This problem has to be solved considering three criteria: heuristics should be able to deal with applications of various sizes, they must meet the capacity constraints of the processors, and they have to take the target architecture topologies into account. In this thesis, tasks are organized in communication networks, modeled as graphs. In order to evaluate the efficiency of the developed heuristics, the mappings obtained by the heuristics are compared to a random mapping. This comparison is used as an evaluation metric throughout this thesis; it is motivated by the fact that no comparative heuristics could be found in the literature at the time of writing (a small illustrative sketch of this metric follows the contents listing below). In order to address this problem, three heuristics are proposed. They solve a dataflow process network mapping problem, where a network of communicating tasks is placed onto a set of processors with limited resource capacities while minimizing the overall communication bandwidth between processors. Task-wise Placement and Subgraph-wise Placement are applied on task graphs where the weights of tasks and edges are set to unitary values. Then, in order to address problems found in industrial cases, the application cases are widened to task graphs whose task and edge weight values are similar to those found in industry; a progressive construction heuristic named Regret-Based Approach, based on game theory, is proposed. In order to check the strength of the algorithm, many types of task graphs of various sizes are generated.

Université Paris-Saclay, Espace Technologique / Immeuble Discovery, Route de l'Orme aux Merisiers RD 128 / 91190 Saint-Aubin, France

To Hélène, Stefan and Maximilien Berger, Ingeborg and Günter Berger, Mireille and Martial Labouise, Jeanne Clermon and Huguette Chaumereux, and my friend Kristina Guseva.

"He who loves practice without theory is like the sailor who boards ship without a rudder and compass and never knows where he may cast." Leonardo da Vinci

Contents
Acknowledgement
Publications
Résumé
Abstract
Résumé Français
Introduction
1 Context
1.1 Introduction
1.2 Moore's law
1.3 Embedded systems
1.3.1 What is an embedded system?
1.3.2 Massively parallel embedded systems
1.4 Manycore architectures
1.4.1 Parallel architectures
1.4.2 Massively Parallel Processor Architectures (MPPA)
1.5 SDF and CSDF dataflow programming models
1.6 ΣC programming language and compilation
1.6.1 ΣC programming language
1.6.2 An Example ΣC Application: Laplacian of an Image
1.6.3 The compilation process
1.7 Formalization of the DPN mapping problem
1.8 A metric to evaluate the quality of solutions
1.8.1 Approximation measure of Demange and Paschos
1.8.2 Random-based approximation metric
1.9 Graph Theory Background
1.9.1 Some Definitions
1.9.2 Breadth-First Traversal (BFT) algorithm
1.9.3 Notion of affinity
1.10 Conclusion
2 State of the Art of the Mapping Problem
2.1 Introduction
2.2 The importance of mapping applications
2.3 Partitioning problems
2.3.1 Bipartitioning algorithms
2.3.2 Multilevel approaches
2.4 Quadratic Assignment Problems (QAP)
2.5 Mapping problems
2.5.1 Two-phases mapping heuristics
2.5.2 One-phase mapping heuristics
2.6 Solvers
2.6.1 Metis, Parmetis, hMetis, kMetis
2.6.2 Scotch and PT-Scotch
2.6.3 Jostle: Parallel Multi-Level Graph Partitioning Software
2.6.4 Other solvers
2.7 Discussion
2.7.1 Overview
2.7.2 Our work
2.8 Conclusion
3 Two Scalable Mapping Methods
3.1 Introduction
3.2 Subgraph-Wise Placement
3.2.1 Creation of a Subgraph of Tasks
3.2.2 Subgraph to node affinity
3.2.3 Complexity of the algorithm
3.2.4 Conclusion
3.3 Task-Wise Placement
3.3.1 Distance affinity
3.3.2 The mapping procedure
3.3.3 Complexity of the Algorithm
3.3.4 Conclusion
3.4 Results
3.4.1 Execution Platform
3.4.2 Instances
3.4.3 Computational results
3.5 Conclusion
4 Regret-Based Mapping Method
4.1 Introduction
4.2 Game theory background
4.2.1 Overview of game theory
4.2.2 Behavioral game theory
4.2.3 Decision theory
4.3 Regret theory
4.3.1 External regret, internal regret and swap regret
4.3.2 Formal definition
4.3.3 Use of regret theory in the literature
4.4 Regret Based Heuristic
4.4.1 Task-Wise Placement behavior on non-unitary task graphs
4.4.2 Introduction of the task cost notion
4.4.3 A new task selection model based on regret theory
4.4.4 Description of the algorithm
4.4.5 Complexity of the algorithm
4.5 GRASP Principles for RBA
4.5.1 Definition of the GRASP procedure
4.5.2 GRASP and RBA
4.6 Application of the heuristic
4.6.1 Execution platform
4.6.2 Instances
4.6.3 The node layout
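The thesis evaluates each heuristic by comparing the communication cost of its mapping against that of a random mapping (Sections 1.7 and 1.8 above). As a rough illustration of that metric, and not the thesis's actual code, the following C++ sketch computes the cut cost of a mapping of a toy task graph and compares it with a random baseline; the data structures and the toy instance are made up for the example.

```cpp
#include <cstddef>
#include <iostream>
#include <random>
#include <vector>

// A weighted edge of the task graph: tasks u and v exchange `weight` units of data.
struct Edge { int u, v; long weight; };

// Communication cost of a mapping: total weight of edges whose endpoints
// are placed on different processors (traffic that crosses the interconnect).
long communicationCost(const std::vector<Edge>& edges, const std::vector<int>& taskToProc) {
    long cost = 0;
    for (const Edge& e : edges)
        if (taskToProc[e.u] != taskToProc[e.v]) cost += e.weight;
    return cost;
}

// Random mapping used as the evaluation baseline (capacities ignored for brevity).
std::vector<int> randomMapping(std::size_t nTasks, int nProcs, std::mt19937& rng) {
    std::uniform_int_distribution<int> pick(0, nProcs - 1);
    std::vector<int> m(nTasks);
    for (int& p : m) p = pick(rng);
    return m;
}

int main() {
    // Toy instance: 4 tasks forming a chain, mapped onto 2 processors.
    std::vector<Edge> edges = {{0, 1, 10}, {1, 2, 1}, {2, 3, 10}};
    std::vector<int> heuristic = {0, 0, 1, 1};   // mapping produced by some heuristic

    std::mt19937 rng(42);
    long base = communicationCost(edges, randomMapping(4, 2, rng));
    long ours = communicationCost(edges, heuristic);
    std::cout << "random baseline: " << base << ", heuristic: " << ours << '\n';
}
```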
Recommended publications
  • Increasing Memory Miss Tolerance for SIMD Cores
Increasing Memory Miss Tolerance for SIMD Cores. David Tarjan, Jiayuan Meng and Kevin Skadron, Department of Computer Science, University of Virginia, Charlottesville, VA 22904. {dtarjan, jm6dg, skadron}@cs.virginia.edu
ABSTRACT: Manycore processors with wide SIMD cores are becoming a popular choice for the next generation of throughput oriented architectures. We introduce a hardware technique called "diverge on miss" that allows SIMD cores to better tolerate memory latency for workloads with non-contiguous memory access patterns. Individual threads within a SIMD "warp" are allowed to slip behind other threads in the same warp, letting the warp continue execution even if a subset of threads are waiting on memory. Diverge on miss can either increase the performance of a given design by up to a factor of 3.14 for a single warp per core, or reduce the number of warps per core needed to sustain a given level of performance from 16 to 2 warps, reducing the area per core by 35%.
…that use a single instruction multiple data (SIMD) organization can amortize the area and power overhead of a single frontend over a large number of execution backends. For example, we estimate that a 32-wide SIMD core requires about one fifth the area of 32 individual scalar cores. Note that this estimate does not include the area of any interconnection network among the MIMD cores, which often grows supra-linearly with the number of cores [18]. To better tolerate memory and pipeline latencies, manycore processors typically use fine-grained multi-threading, switching among multiple warps, so that active warps can mask stalls in other warps waiting on long-latency events. The drawback of this approach is that the size of the regis…
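As a rough intuition for the "diverge on miss" idea, and not the authors' simulator or hardware model, the following C++ sketch contrasts a lock-step warp, where one thread's miss stalls everyone, with an idealized policy where threads that hit keep slipping ahead; the access pattern and the miss latency are arbitrary assumptions for the example.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

constexpr long LATENCY = 100;   // assumed cycles lost on a memory miss (hits cost 1 cycle)

int main() {
    const int warpWidth = 8, nInstr = 64;
    // Illustrative access pattern: thread t misses on instructions where (i + t) % 16 == 0.
    std::vector<std::vector<long>> cost(warpWidth, std::vector<long>(nInstr, 1));
    for (int t = 0; t < warpWidth; ++t)
        for (int i = 0; i < nInstr; ++i)
            if ((i + t) % 16 == 0) cost[t][i] = LATENCY;

    // Lock-step SIMD: every instruction takes as long as the slowest thread in the warp.
    long lockstep = 0;
    for (int i = 0; i < nInstr; ++i) {
        long worst = 0;
        for (int t = 0; t < warpWidth; ++t) worst = std::max(worst, cost[t][i]);
        lockstep += worst;
    }

    // Idealized diverge-on-miss: threads that hit keep slipping ahead, so the warp
    // finishes when the slowest *thread* finishes (ignores shared-frontend limits).
    long diverge = 0;
    for (int t = 0; t < warpWidth; ++t) {
        long sum = 0;
        for (int i = 0; i < nInstr; ++i) sum += cost[t][i];
        diverge = std::max(diverge, sum);
    }

    std::cout << "lock-step cycles: " << lockstep
              << ", diverge-on-miss (idealized): " << diverge << '\n';
}
```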
  • Tousimojarad, Ashkan (2016) GPRM: a High Performance Programming Framework for Manycore Processors. PhD thesis
Tousimojarad, Ashkan (2016) GPRM: a high performance programming framework for manycore processors. PhD thesis. http://theses.gla.ac.uk/7312/
Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given. Glasgow Theses Service, http://theses.gla.ac.uk/, [email protected]
GPRM: A High Performance Programming Framework for Manycore Processors. Ashkan Tousimojarad. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Computing Science, College of Science and Engineering, University of Glasgow, November 2015. © Ashkan Tousimojarad.
Abstract: Processors with large numbers of cores are becoming commonplace. In order to utilise the available resources in such systems, the programming paradigm has to move towards increased parallelism. However, increased parallelism does not necessarily lead to better performance. Parallel programming models have to provide not only flexible ways of defining parallel tasks, but also efficient methods to manage the created tasks. Moreover, in a general-purpose system, applications residing in the system compete for the shared resources. Thread and task scheduling in such a multiprogrammed multithreaded environment is a significant challenge. In this thesis, we introduce a new task-based parallel reduction model, called the Glasgow Parallel Reduction Machine (GPRM).
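The excerpt above does not show GPRM's own API, but the task-based reduction idea it builds on can be illustrated with plain standard C++: the sketch below splits a sum into recursively spawned tasks with std::async; the chunk size and spawn depth are arbitrary choices for the example, not GPRM parameters.

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Reduce a range by recursively splitting it into independent tasks.
long parallelSum(const std::vector<long>& data, std::size_t lo, std::size_t hi, int depth) {
    if (depth == 0 || hi - lo < 1024)                       // small chunk: reduce sequentially
        return std::accumulate(data.begin() + lo, data.begin() + hi, 0L);
    std::size_t mid = lo + (hi - lo) / 2;
    auto left = std::async(std::launch::async,              // spawn a task for the left half
                           parallelSum, std::cref(data), lo, mid, depth - 1);
    long right = parallelSum(data, mid, hi, depth - 1);     // compute the right half here
    return left.get() + right;
}

int main() {
    std::vector<long> data(1 << 20, 1);
    std::cout << parallelSum(data, 0, data.size(), 4) << '\n';   // expect 1048576
}
```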
  • Multi-Core Processors and Systems: State-Of-The-Art and Study of Performance Increase
Multi-Core Processors and Systems: State-of-the-Art and Study of Performance Increase. Abhilash Goyal, Computer Science Department, San Jose State University, San Jose, CA 95192, 408-924-1000, [email protected]
ABSTRACT: To achieve large processing power, we are moving towards parallel processing. In simple words, parallel processing can be defined as using two or more processors (cores, computers) in combination to solve a single problem. To achieve good results with parallel processing, many multi-core processors have been designed and fabricated in industry. In this class-project paper, an overview of the state of the art of multi-core processors designed by several companies, including Intel, AMD, IBM and Sun (Oracle), is presented. In addition to the overview, the main advantage of using multi-core is demonstrated by experimental results. The focus of the experiment is to study the speed-up in the execution of the program as the number of processors (cores) increases. For this experiment, an open-source parallel program that counts prime numbers is considered, and simulations are performed on a 3-node Raspberry Pi cluster. The obtained results show that the execution time of the parallel program decreases as the number of cores increases.
…speedup. Some tasks are easily divided into parts that can be processed in parallel. In those scenarios, speed-up will most likely follow the "common trajectory" shown in Figure 2. If an application has little or no inherent parallelism, then little or no speedup will be achieved and, because of overhead, speed-up may follow the "occasional trajectory" in Figure 2.
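A minimal C++ analogue of the experiment described, counting primes in parallel and timing the run for different thread counts, is sketched below; it is not the open-source program used in the paper, and the limit and thread counts are arbitrary.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

// Trial-division primality test (deliberately naive: the point is CPU-bound work).
static bool isPrime(long n) {
    if (n < 2) return false;
    for (long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// Count primes in [2, limit) using nThreads workers, interleaving the range among them.
static long countPrimes(long limit, unsigned nThreads) {
    std::vector<long> partial(nThreads, 0);        // one slot per worker, no sharing
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < nThreads; ++t)
        workers.emplace_back([&, t] {
            for (long n = 2 + t; n < limit; n += nThreads)
                if (isPrime(n)) ++partial[t];
        });
    for (auto& w : workers) w.join();
    long total = 0;
    for (long p : partial) total += p;
    return total;
}

int main() {
    const long limit = 200000;
    for (unsigned threads : {1u, 2u, 4u}) {
        auto start = std::chrono::steady_clock::now();
        long count = countPrimes(limit, threads);
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      std::chrono::steady_clock::now() - start).count();
        std::cout << threads << " thread(s): " << count << " primes in " << ms << " ms\n";
    }
}
```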
  • Understanding and Guiding the Computing Resource Management in a Runtime Stacking Context
Thesis presented at the Université de Bordeaux, École Doctorale de Mathématiques et d'Informatique, by Arthur Loussert, for the degree of Doctor, specialty: Computer Science. Understanding and Guiding the Computing Resource Management in a Runtime Stacking Context. Reviewed by: Allen D. Malony, Professor, University of Oregon; Jean-François Méhaut, Professeur, Université Grenoble Alpes. Defense date: 18 December 2019. Examination committee: Raymond Namyst, Professeur, Université de Bordeaux (thesis supervisor); Marc Pérache, Ingénieur-Chercheur, CEA (thesis co-supervisor); Emmanuel Jeannot, Directeur de recherche, Inria Bordeaux Sud-Ouest (jury president); Edgar Leon, Computer Scientist, Lawrence Livermore National Laboratory (examiner); Patrick Carribault, Ingénieur-Chercheur, CEA (examiner); Julien Jaeger, Ingénieur-Chercheur, CEA (invited). 2019.
Keywords: High-Performance Computing, Parallel Programming, MPI, OpenMP, Runtime Mixing, Runtime Stacking, Resource Allocation, Resource Management
Abstract: With the advent of multicore and manycore processors as building blocks of HPC supercomputers, many applications shift from relying solely on a distributed programming model (e.g., MPI) to mixing distributed and shared-memory models (e.g., MPI+OpenMP). This leads to a better exploitation of shared-memory communications and reduces the overall memory footprint. However, this evolution has a large impact on the software stack, as applications' developers typically mix several programming models to scale over a large number of multicore nodes while coping with their hierarchical depth. One side effect of this programming approach is runtime stacking: mixing multiple models involves various runtime libraries being alive at the same time. Dealing with different runtime systems may lead to a large number of execution flows that may not efficiently exploit the underlying resources.
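"Runtime stacking" here means that two runtime systems, MPI and OpenMP, are alive in the same application. A minimal, generic hybrid example (not taken from the thesis) is shown below; compile and launch commands vary by MPI distribution.

```cpp
// Minimal MPI+OpenMP hybrid: one MPI process per node, OpenMP threads inside it.
// Typical build (implementation-dependent): mpicxx -fopenmp hybrid.cpp -o hybrid
#include <mpi.h>
#include <omp.h>
#include <cstdio>

int main(int argc, char** argv) {
    int provided = 0;
    // Ask for a thread level where only the master thread calls MPI.
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long localWork = 0;
    #pragma omp parallel reduction(+ : localWork)
    {
        // Each OpenMP thread of each MPI rank does a slice of the work.
        localWork += omp_get_thread_num() + 1;
        #pragma omp master
        std::printf("rank %d/%d runs %d OpenMP threads\n", rank, size, omp_get_num_threads());
    }

    long total = 0;                       // shared-memory results combined across ranks
    MPI_Reduce(&localWork, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("total work tokens: %ld\n", total);

    MPI_Finalize();
    return 0;
}
```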
  • Consolidating High-Integrity, High-Performance, and Cyber-Security Functions on a Manycore Processor
Consolidating High-Integrity, High-Performance, and Cyber-Security Functions on a Manycore Processor. Benoît Dupont de Dinechin, Kalray S.A., [email protected]
Figure 1: Overview of the MPPA3 processor.
ABSTRACT: The requirement of high performance computing at low power can be met by the parallel execution of an application on a possibly large number of programmable cores. However, the lack of accurate timing properties may prevent parallel execution from being applicable to time-critical applications. This problem has been addressed by suitably designing the architecture, implementation, and programming models of the Kalray MPPA (Multi-Purpose Processor Array) family of single-chip many-core processors. We introduce the third-generation MPPA processor, whose key features are motivated by the high-performance and high-integrity functions of automated vehicles. High-performance computing functions, represented by deep learning inference and by computer vision, need to execute under soft real-time constraints. High-integrity functions are developed under model-based design, and must meet hard real-time constraints. Finally, the third-generation MPPA processor integrates a hardware root of trust, and its security architecture is able to support a security kernel for implementing the trusted…
CCS CONCEPTS: • Computer systems organization → Multicore architectures; Heterogeneous (hybrid) systems; System on a chip; Real-time languages.
KEYWORDS: manycore processor, cyber-physical system, dependable computing
ACM Reference Format: Benoît Dupont de Dinechin. 2019. Consolidating High-Integrity, High-Performance, and Cyber-Security Functions on a Manycore Processor. In The 56th Annual Design Automation Conference 2019 (DAC '19), June 2–6, 2019, Las Vegas, NV, USA. ACM, New York, NY, USA, 4 pages. https://doi.org/10.1145/3316781.3323473
1 INTRODUCTION: Cyber-physical systems are characterized by software that interacts with the physical world, often with timing-sensitive safety-critical physical sensing and actuation [10].
  • Parallel Processing with the MPPA Manycore Processor
Parallel Processing with the MPPA Manycore Processor. Kalray MPPA® Massively Parallel Processor Array. Benoît Dupont de Dinechin, CTO. 14 November 2018.
Outline: Presentation; Manycore Processors; Manycore Programming; Symmetric Parallel Models; Untimed Dataflow Models; Kalray MPPA® Hardware; Kalray MPPA® Software; Model-Based Programming; Deep Learning Inference; Conclusions.
KALRAY IN A NUTSHELL: We design processors at the heart of new intelligent systems. A unique technology, result of 10 years of development. ~80 people, ~70 engineers, 4 offices: Grenoble, Sophia (France), Silicon Valley (Los Altos, USA), Yokohama (Japan). Financial and industrial shareholders (Pengpai).
KALRAY: PIONEER OF MANYCORE PROCESSORS: #1 Scalable computing power; #2 Data processing in real time; #3 Completion of dozens of critical tasks in parallel; #4 Low power consumption; #5 Programmable / open system; #6 Security & safety.
OUTSOURCED PRODUCTION (A FABLESS BUSINESS MODEL): Partnership with the world leader in processor manufacturing. Sub-contracted production: signed framework agreement with GUC, subsidiary of TSMC (world top-3 in semiconductor manufacturing). Limited investment, no expansion costs, production on the basis of purchase orders.
INTELLIGENT DATA CENTER: KEY COMPETITIVE ADVANTAGES: First "NVMe-oF all-in-one" certified solution*, 8x more powerful than the latest products announced by our competitors**
  • A Scalable Manycore Simulator for the Epiphany Architecture
A scalable manycore simulator for the Epiphany architecture. Master's thesis in Computer Science and Engineering. Ola Jeppsson. Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, Gothenburg, Sweden, 2019.
© Ola Jeppsson, 2019. Supervisor: Sally A. McKee, Department of Computer Science and Engineering. Examiner: Mary Sheeran, Department of Computer Science and Engineering. Master's Thesis 2019, Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg, SE-412 96 Gothenburg, Telephone +46 31 772 1000. Typeset in LaTeX. Gothenburg, Sweden 2019.
Abstract: The core count of manycore processors increases at a rapid pace; chips with hundreds of cores are readily available, and thousands of cores on a single die have been demonstrated. A scalable model is needed to be able to effectively simulate this class of processors. We implement a parallel functional network-on-chip simulator for the Adapteva Epiphany architecture, which we integrate with an existing single-core simulator to create a manycore model. To verify the implementation, we run a set of example programs from the Epiphany SDK and the Epiphany toolchain test suite against the simulator. We run a parallel matrix multiplication program against the simulator spread across a varying number of networked computing nodes to verify the MPI implementation.
  • Power and Energy Characterization of an Open Source 25-Core Manycore Processor
Power and Energy Characterization of an Open Source 25-core Manycore Processor. Michael McKeown, Alexey Lavrov, Mohammad Shahrad, Paul J. Jackson, Yaosheng Fu*, Jonathan Balkind, Tri M. Nguyen, Katie Lim, Yanqi Zhou†, David Wentzlaff. Princeton University. {mmckeown, alavrov, mshahrad, pjj, yfu, jbalkind, trin, kml4, yanqiz, [email protected]. * Now at NVIDIA. † Now at Baidu.
Abstract: The end of Dennard's scaling and the looming power wall have made power and energy primary design goals for modern processors. Further, new applications such as cloud computing and Internet of Things (IoT) continue to necessitate increased performance and energy efficiency. Manycore processors show potential in addressing some of these issues. However, there is little detailed power and energy data on manycore processors. In this work, we carefully study detailed power and energy characteristics of Piton, a 25-core modern open source academic processor, including voltage versus frequency scaling, energy per instruction (EPI), memory system energy, network-on-chip (NoC) energy, thermal characteristics, and application performance and power consumption. This is the first detailed power and energy characterization of an open source manycore design implemented in silicon. The open source nature of the processor provides increased value…
Figure 1. Piton die, wirebonds, and package without epoxy encapsulation (a) and annotated CAD tool layout screenshot (b).
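Energy per instruction (EPI), one of the characteristics studied, is simply average power multiplied by runtime and divided by the number of instructions executed. The small C++ helper below illustrates the arithmetic on placeholder numbers, not on measurements from the Piton paper.

```cpp
#include <cstdio>

// EPI = (average power [W] * runtime [s]) / instructions executed.
// The figures below are placeholders, not measurements from the Piton paper.
int main() {
    double powerWatts   = 3.2;      // assumed average power draw
    double runtimeSec   = 1.5;      // assumed benchmark runtime
    double instructions = 2.0e9;    // assumed retired instruction count

    double energyJoules = powerWatts * runtimeSec;
    double epi = energyJoules / instructions;        // joules per instruction
    std::printf("energy = %.2f J, EPI = %.2f nJ/instruction\n", energyJoules, epi * 1e9);
    return 0;
}
```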
  • MALT Is a Truly Manycore Processor
https://www.maltsystem.com, [email protected]
Manycore processors MALT. MALT (Manycore Architecture with Lightweight Threads) is a family of energy-efficient processors with hundreds of cores on a single chip. The MALT processor family has models with scalar, vector and hybrid architectures. Solutions based on MALT come near to customized FPGA systems in terms of performance-per-watt, surpassing them in performance-per-dollar value. MALT architecture programming complexity is on a par with universal x86/GPU/ARM computer systems, while showing performance-per-watt of a much higher order.
Application field: Solutions based on MALT may be used as universal processors; nevertheless, they demonstrate the highest efficiency on the tasks they are designed for:
• Blockchain and cryptocurrency: MALT-C is a universal processor for complex cryptographic transformations, including blockchain transactions, with utmost energy efficiency: Ethereum smart contract processing; delivery of trusted blockchain solutions; stream encryption, data integrity checking; modern cryptocurrency mining.
• Big data
Programming: MALT programming is almost as easy as universal manycore processor programming. Depending on their preferences and application code optimization requirements, a programmer can choose one of the following approaches:
• C++ for MALT (under development). It is the easiest way to start working with MALT. The C++17 standard is supported to program scalar cores. Thread operations are implemented through the STL constructions (std::thread, std::mutex, etc.). MALT is considered a classic manycore processor, therefore it is easy to port existing software and libraries.
• OpenCL for MALT (under development). The OpenCL standard is used for operations with scalar and vector MALT cores.
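The brochure's "C++ for MALT" approach relies on standard constructs such as std::thread and std::mutex. The snippet below is ordinary portable C++ illustrating that style (a mutex-protected shared counter); it is not MALT-specific code and makes no assumption about the MALT toolchain.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    const unsigned nWorkers = 4;         // stand-in for a handful of scalar cores
    long counter = 0;
    std::mutex counterMutex;

    std::vector<std::thread> workers;
    for (unsigned w = 0; w < nWorkers; ++w)
        workers.emplace_back([&] {
            for (int i = 0; i < 1000; ++i) {
                std::lock_guard<std::mutex> lock(counterMutex);   // serialize the shared update
                ++counter;
            }
        });
    for (auto& t : workers) t.join();

    std::cout << "counter = " << counter << '\n';   // expect 4000
}
```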
  • Enhancing Programmability in NoC-Based Lightweight Manycore Processors with a Portable MPI Library
Enhancing Programmability in NoC-Based Lightweight Manycore Processors with a Portable MPI Library.
To cite this version: João Fellipe Uller, João Souto, Pedro Henrique Penna, Márcio Castro, Henrique Freitas, et al. Enhancing Programmability in NoC-Based Lightweight Manycore Processors with a Portable MPI Library. XXI Simpósio em Sistemas Computacionais de Alto Desempenho, Oct 2020, Online, Brazil. hal-02975094.
HAL Id: hal-02975094, https://hal.archives-ouvertes.fr/hal-02975094. Submitted on 22 Oct 2020.
Authors: João Fellipe Uller (1), João Vicente Souto (1), Pedro Henrique Penna (2,3), Márcio Castro (1), Henrique Freitas (3), Jean-François Méhaut (2). (1) Laboratório de Pesquisa em Sistemas Distribuídos (LaPeSD), Universidade Federal de Santa Catarina (UFSC), Brazil. (2) Computer Architecture and Parallel Processing Team (CArT), Pontifícia Universidade Católica de Minas Gerais (PUC Minas), Brazil. (3) Laboratoire d'Informatique de Grenoble (LIG), Université Grenoble Alpes (UGA), France. [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract.
  • Parallel Programming Model for the Epiphany Many-Core Coprocessor Using Threaded MPI James A
Parallel Programming Model for the Epiphany Many-Core Coprocessor Using Threaded MPI. James A. Ross, Engility Corporation, Aberdeen Proving Ground, MD, [email protected]; David A. Richie, Brown Deer Technology, Forest Hill, MD, [email protected]; Song J. Park, U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, [email protected]; Dale R. Shires, U.S. Army Research Laboratory, Aberdeen Proving Ground, MD, [email protected]
ABSTRACT: The Adapteva Epiphany many-core architecture comprises a 2D tiled mesh Network-on-Chip (NoC) of low-power RISC cores with minimal uncore functionality. It offers high computational energy efficiency for both integer and floating point calculations as well as parallel scalability. Yet despite the interesting architectural features, a compelling programming model has not been presented to date. This paper demonstrates an efficient parallel programming model for the Epiphany architecture based on the Message Passing Interface (MPI) standard. Using MPI exploits the similarities between the Epiphany architecture and a conventional parallel distributed cluster of serial cores. Our approach enables MPI codes to execute on the RISC array processor with little modification and achieve high performance.
1. INTRODUCTION: The emergence of a wide range of parallel processor architectures continues to present the challenge of identifying an effective programming model that provides access to the capabilities of the architecture while simultaneously providing the programmer with familiar, if not standardized, semantics and syntax. The programmer is frequently left with the choice of using a non-standard programming model specific to the architecture or a standardized programming model that yields poor control and performance. The Adapteva Epiphany MIMD architecture is a scalable 2D array of RISC cores with minimal un-core functionality connected…
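As a generic flavor of the message-passing style the paper maps onto the Epiphany cores (standard MPI, not the threaded-MPI implementation it describes), the sketch below passes a token around a ring of ranks with point-to-point sends and receives.

```cpp
// Ring exchange with plain MPI point-to-point messages.
// Typical build and run: mpicxx ring.cpp -o ring && mpirun -np 4 ./ring
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int next = (rank + 1) % size;          // neighbor downstream in the ring
    const int prev = (rank + size - 1) % size;   // neighbor upstream in the ring

    int token = 0;
    if (rank == 0) {
        token = 42;                                           // rank 0 starts the token
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::printf("token back at rank 0 after visiting %d ranks\n", size);
    } else {
        MPI_Recv(&token, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        ++token;                                              // each hop increments the token
        MPI_Send(&token, 1, MPI_INT, next, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```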
  • Manycore Platforms
2 Manycore Platforms. Andrea Marongiu (1), Vincent Nélis (2) and Patrick Meumeu Yomsi (2). (1) Swiss Federal Institute of Technology in Zürich (ETHZ), Switzerland, and University of Bologna, Italy. (2) CISTER Research Centre, Polytechnic Institute of Porto, Portugal.
This chapter surveys state-of-the-art manycore platforms. It discusses the historical evolution of computing platforms over the past decades and the technical hurdles that led to the manycore revolution, then presents in detail several manycore platforms, outlining (i) the key architectural traits that enable scalability to several tens or hundreds of processing cores and (ii) the shared resources that are responsible for unpredictable timing.
2.1 Introduction: Starting from the early 2000s, general-purpose processor manufacturers adopted the chip multiprocessor (CMP) design paradigm [1] to overcome technological "walls." Single-core processor designs hit the power wall around 2004, when the consolidated strategy of scaling down the gate size of integrated circuits – reducing the supply voltage and increasing the clock frequency – became unfeasible because of excessive power consumption and expensive packaging and cooling solutions [2]. The CMP philosophy replaces a single, very fast core with multiple cores that cooperate to achieve equivalent performance, but each operating at a lower clock frequency and thus consuming less power. Over the past 20 years, processor performance has increased at a faster rate than memory performance [3], which created a gap that is commonly referred to as the memory wall. Historically, sophisticated multi-level cache hierarchies have been built to implement main memory access latency hiding techniques. As CMPs use lower clock frequencies, the processor–memory gap grows at a slower rate, compared to traditional single-core systems.