UPC Collective Optimization


Savvas Petrou

August 20, 2009

MSc in High Performance Computing
The University of Edinburgh
Year of Presentation: 2009

Abstract

Efficient collective operations are an important feature for new parallel programming languages. The size of massively parallel machines is increasing fast, and the performance of these operations is crucial to the scalability of many applications. The Unified Parallel C (UPC) language is one of the new parallel programming languages based on the Partitioned Global Address Space (PGAS) model. The purpose of this project is to investigate the performance and scaling of the current implementations of collective operations, and also to develop new implementations based on tree-structured algorithms. For the benchmarks, two Computational Fluid Dynamics (CFD) codes are used. Applications of this type make extensive use of functions that gather and scatter information for various parameters of the simulation, which makes them ideal for benchmarking collective operations. The benchmarks are executed on the United Kingdom National Supercomputing Service, HECToR. The system is a hybrid supercomputer consisting of a massively parallel scalar system and a vector system. Both systems are tested to observe the behaviour of the algorithms on different architectures.

Contents

1 Introduction
2 Background
  2.1 Unified Parallel C
    2.1.1 UPC Compilers
    2.1.2 Collectives implementations
  2.2 The system
    2.2.1 HECToR Cray XT4
    2.2.2 HECToR Cray X2
  2.3 Applications
    2.3.1 BenchC
    2.3.2 XFlow
3 Porting
  3.1 BenchC
    3.1.1 Preprocessor flags
    3.1.2 Timers
    3.1.3 Header files issue
    3.1.4 Broadcasts within all-reductions
  3.2 XFlow
    3.2.1 Collective operations file
    3.2.2 Timers
    3.2.3 Header files issue
    3.2.4 Broadcasts within all-reductions
  3.3 Tests
4 Optimizations
  4.1 Broadcast
    4.1.1 Default CFD code implementation
    4.1.2 Implementation 1
    4.1.3 Implementation 2
    4.1.4 Implementation 3
    4.1.5 GASNet implementation
    4.1.6 MTU implementation
  4.2 Reduction
    4.2.1 Default CFD code implementation
    4.2.2 Implementation 1
    4.2.3 Implementation 2
    4.2.4 MTU implementation
5 Benchmarks and Analysis
  5.1 Input parameters
    5.1.1 BenchC
    5.1.2 XFlow
  5.2 Benchmarks
    5.2.1 Cray XT
    5.2.2 Cray X2
6 Conclusions and Future Work
  6.1 Conclusions
  6.2 Future work
A HECToR modules
  A.1 Cray XT4
B Graphs
  B.1 Cray XT
    B.1.1 Broadcast
    B.1.2 Reduction
  B.2 Cray X2
    B.2.1 Broadcast
    B.2.2 Reduction
C Work Plan

List of Tables

3.1 Timers used in BenchC depending on programming standard, architecture and compiler
4.1 Quad tree statistics for first broadcast implementation
4.2 Quad tree statistics for second broadcast implementation
5.1 Input parameters for BenchC
5.2 Percent of time spent in collective operations [BenchC, XT4]
5.3 Percent of time spent in collective operations [BenchC, X2]
5.4 Input parameters for XFlow
5.5 Percent of time spent in collective operations [XFlow, XT4]
5.6 Percent of time spent in collective operations [XFlow, X2]

List of Figures

2.1 UPC shared variables example
2.2 AMD Opteron Quad-core architecture schematic
2.3 BenchC execution diagram
4.1 User broadcast implementation
4.2 First implementation of broadcast
4.3 Second implementation of broadcast
4.4 Third implementation of broadcast
4.5 Binomial tree of order 4
4.6 MTU GET implementation of broadcast
4.7 User implementation of reduction
4.8 First implementation of reduction
4.9 Second implementation of reduction
5.1 Broadcast tree rank benchmarks [BenchC, XT4]
5.2 All broadcast implementations comparison [BenchC, XT4]
5.3 All broadcast implementations comparison [XFlow, XT4]
5.4 Reduction tree rank benchmarks [BenchC, XT4]
5.5 All reduction implementations comparison [BenchC, XT4]
5.6 All reduction implementations comparison [XFlow, XT4]
5.7 Broadcast tree rank benchmarks [BenchC, X2]
5.8 All broadcast implementations comparison [BenchC, X2]
5.9 All broadcast implementations comparison [XFlow, X2]
5.10 Reduction tree rank benchmarks [BenchC, X2]
5.11 All reduction implementations comparison [BenchC, X2]
5.12 All reduction implementations comparison [XFlow, X2]
B.1 All broadcast implementations comparison (tree rank 4) [BenchC, XT4]
B.2 All broadcast implementations comparison (tree rank 8) [BenchC, XT4]
B.3 Broadcast tree rank benchmarks [XFlow, XT4]
B.4 All broadcast implementations comparison (tree rank 4) [XFlow, XT4]
B.5 All broadcast implementations comparison (tree rank 8) [XFlow, XT4]
B.6 All reduction implementations comparison (tree rank 4) [BenchC, XT4]
B.7 All reduction implementations comparison (tree rank 16) [BenchC, XT4]
B.8 Reduction tree rank benchmarks [XFlow, XT4]
B.9 All reduction implementations comparison (tree rank 4) [XFlow, XT4]
B.10 All reduction implementations comparison (tree rank 16) [XFlow, XT4]
B.11 All broadcast implementations comparison (tree rank 4) [BenchC, X2]
B.12 All broadcast implementations comparison (tree rank 8) [BenchC, X2]
B.13 Broadcast tree rank benchmarks [XFlow, X2]
B.14 All broadcast implementations comparison (tree rank 4) [XFlow, X2]
B.15 All broadcast implementations comparison (tree rank 8) [XFlow, X2]
B.16 All reduction implementations comparison (tree rank 4) [BenchC, X2]
B.17 All reduction implementations comparison (tree rank 8) [BenchC, X2]
B.18 Reduction tree rank benchmarks [XFlow, X2]
B.19 All reduction implementations comparison (tree rank 4) [XFlow, X2]
B.20 All reduction implementations comparison (tree rank 8) [XFlow, X2]
C.1 Gantt diagram

Acknowledgements

First of all, I would like to thank my supervisors, Dr Adam Carter and Jason Beech-Brandt, for their continued support and supervision throughout this project. Also, I want to express my appreciation to my colleague and friend Christopher Charles Evi-Parker for his help during the proofreading stage. Last but not least, I thank my parents for making this Masters degree possible.

Chapter 1 Introduction

In parallel programming there are many factors that must be considered in order to develop an application with good parallel characteristics. One of these factors is efficient communication between the processes when performing collective operations. Depending on the data dependencies and the algorithms used, different operations are important for each application. For portability and performance reasons these operations are usually part of the parallel programming language specification. This creates an abstraction layer and gives developers the opportunity to write their own implementations in cases where the default versions perform poorly. Moreover, codes are able to execute on a variety of systems and architectures without the need for porting procedures.

Unified Parallel C (UPC) is one of the new parallel programming languages based on the Partitioned Global Address Space (PGAS) model. The main feature of the model is the use of single-sided communications for data exchange between processes. This offers great flexibility to the developer and makes it possible to parallelise applications that have dynamic communication patterns.

The United Kingdom National Supercomputing Service, known by the name HECToR, has recently installed a UPC compiler for both architectures of the system. HECToR is a Cray XT5h hybrid architecture system. It consists of a Cray XT4 massively parallel scalar system and a Cray X2 vector system. In addition to the installation of the UPC compiler, the XT system was upgraded from dual-core to quad-core processors. The multi-core architecture introduced new potential optimisations for collective functions.

The purpose of the project is to benchmark the currently available implementations of collective operations and to develop new versions in an effort to optimise them. For the benchmarks, two Computational Fluid Dynamics (CFD) applications are going to be used. They are both developed by the same software development group,
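To make the single-sided communication style concrete, below is a minimal sketch (illustrative only, not code from this thesis) of a UPC shared array combined with a flat "pull" broadcast: every thread copies thread 0's block into its own block with the one-sided upc_memget, without thread 0 posting any matching send. The array name, element type and block size N are assumptions made for the example.

#include <upc.h>

#define N 1024

/* Blocked layout: elements [t*N .. t*N+N-1] have affinity to thread t. */
shared [N] double data[N * THREADS];

void broadcast_flat(void)
{
    upc_barrier;                      /* thread 0's block must be ready */
    if (MYTHREAD != 0)
        upc_memget((double *)&data[MYTHREAD * N],  /* local destination */
                   &data[0],                       /* remote block on thread 0 */
                   N * sizeof(double));
    upc_barrier;                      /* every thread now holds a copy */
}

With all threads reading from thread 0 at once, the source becomes a hot spot as the thread count grows; flat schemes of this kind are the natural baseline against which tree-structured collectives are measured.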
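A tree-structured alternative spreads that load. The sketch below, reusing the declarations from the previous example, is a textbook binomial-tree broadcast: in round k, every thread that already holds the data pushes it 2^k ranks ahead with the one-sided upc_memput, so the broadcast completes in ceil(log2 THREADS) rounds instead of serialising on one source. It only illustrates the idea; the implementations developed in later chapters differ in detail (for example, the quad trees of Chapter 4).

/* Illustrative binomial-tree broadcast from thread 0. */
void broadcast_tree(void)
{
    for (int k = 1; k < THREADS; k <<= 1) {
        upc_barrier;                  /* previous round must be complete */
        if (MYTHREAD < k && MYTHREAD + k < THREADS)
            upc_memput(&data[(MYTHREAD + k) * N],      /* remote destination */
                       (double *)&data[MYTHREAD * N],  /* local source */
                       N * sizeof(double));
    }
    upc_barrier;
}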