Cray Centre of Excellence for HECToR: Activities and Future Projects


HECToR User Group Meeting 2010, 12 Oct 2010, Manchester

Outline
• Phase 2b – Cray XT6
• Current and past projects: SBLI, LUDWIG, HiGEM, GADGET, HELIUM
• Future projects
• Cray XE6 Gemini results
• New Cray Centre of Excellence for HECToR website!
• Cray Exascale Research Initiative Europe
• Training and conferences

HECToR hardware
• Cray XT4 compute nodes
  • 3072 × 2.3 GHz quad-core Opteron nodes – 12,288 cores
  • 8 GB per node (2 GB per core)
• Cray X2 compute nodes
  • 28 × 4 Cray X2 compute nodes
  • 32 GB per node (8 GB per core)
• Cray XT6 compute nodes
  • 1856 × 2.1 GHz dual 12-core Opteron nodes – 44,544 cores
  • 32 GB per node (1.33 GB per core)

AMD Magny-Cours (Cray XT6 node)
[Die diagram: four six-core "Greyhound" dies, each with 6 MB L3 cache, DDR3 channels and HT3 links]
• 2 multi-chip modules, 4 Opteron dies
• 8 channels of DDR3 bandwidth to 8 DIMMs
• 24 (or 16) computational cores, 24 MB of L3 cache
• Dies are fully connected with HT3
• Snoop filter feature allows the 4-die SMP to scale well

Cray XT6 node characteristics
• Number of cores: 24 (MC-12, 2.1 GHz)
• Peak performance: 202 Gflops
• Memory size: 32 GB per node
• Memory bandwidth: 83.5 GB/s direct-connect memory
• 6.4 GB/s direct-connect HyperTransport
• Cray SeaStar2+ interconnect

SBLI
• A user reported that large runs on jaguarpf were dying with MPI buffer overflows.
• Preposting MPI receives, together with appropriate MPI environment variable settings, alleviates this issue and provides much better scalability.
[Chart: SBLI performance on jaguarpf, UU dataset – Mgrid points × iterations/sec vs. cores up to ~250,000, optimised vs. unoptimised code]

LUDWIG
• Lattice-Boltzmann code by Dr Kevin Stratford (EPCC) and Prof. Mike Cates (University of Edinburgh, Dept of Physics)
• Scaling very well up to 216,000 cores
• Smaller global lattice than run previously on barcelona/jaguarpf – really pushing the strong scaling
[Chart: Ludwig strong scaling on jaguarpf – parallel speedup vs. cores, reduced halo vs. perfect scaling]

HiGEM
• Dr Len Shaffrey, University of Reading – high-resolution coupled climate model
• The CoE is collaborating with NCAS researchers to refine UM I/O performance.
• I/O servers now overlap communication and computation; the first stages are complete.
• Currently applied to a single version of the Unified Model; a moderate effort is required to back-port the code to earlier versions, and large numbers of different UM versions are in use on HECToR.
• The I/O server technique has been employed in other applications as well – HELIUM, CARP.

GADGET
• Dr Adrian Jenkins, Prof. Richard Bower, Dr Tom Theuns (Durham University); Dr Robert Crain (Swinburne University)
• The CoE is collaborating with Institute for Computational Cosmology (ICC) researchers to better understand and improve the scaling of GADGET3.
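The preposting advice given for SBLI above exploits standard MPI matching behaviour: a receive posted before the matching send arrives gives the library a user buffer to deliver into, whereas an unmatched ("unexpected") message must be staged in a bounded internal buffer. A toy Python model of the effect (illustrative only – the class, capacities and message sizes are invented, not real MPI internals):

```python
# Toy model of MPI message matching: unexpected messages consume a
# bounded internal buffer, while preposted receives bypass it.
# Illustrative sketch only -- not real MPI semantics in full detail.

class Receiver:
    def __init__(self, unexpected_capacity):
        self.capacity = unexpected_capacity  # bytes of internal buffer
        self.used = 0
        self.posted = set()                  # (source, tag) preposted receives

    def post_recv(self, source, tag):
        self.posted.add((source, tag))

    def deliver(self, source, tag, nbytes):
        """Return True if the message is accepted, False on overflow."""
        if (source, tag) in self.posted:
            self.posted.remove((source, tag))
            return True                      # lands in the user buffer directly
        if self.used + nbytes > self.capacity:
            return False                     # unexpected-buffer overflow
        self.used += nbytes                  # staged until a receive is posted
        return True

# 100 senders of 1 MB each, with 16 MB of unexpected-message space.
naive = Receiver(unexpected_capacity=16 * 2**20)
overflows = sum(not naive.deliver(src, 0, 2**20) for src in range(100))

preposted = Receiver(unexpected_capacity=16 * 2**20)
for src in range(100):
    preposted.post_recv(src, 0)
overflows_preposted = sum(not preposted.deliver(src, 0, 2**20) for src in range(100))

print(overflows, overflows_preposted)        # → 84 0
```

Without preposting, only the first 16 of 100 incoming messages fit in the internal buffer; with every receive preposted, none of them touch it.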
• The code exhibits significant load imbalance with current datasets at small core counts: 50% of the time is spent waiting at MPI collectives on 64 cores.
• The CoE is looking to improve the load balancing in the code.
• Performance characterisation of the new hybrid MPI/pthreads code.
[Chart: GADGET3 scaling on Cray XT6 and Cray XE6 – cost (time(s) × cores) vs. cores, for MPI and MPI/pthreads versions of each]

HELIUM
• Collaboration with Prof. Ken Taylor, Queen's University Belfast
• CoE work involved implementation of an I/O server.
• Work reported at the Cray User Group in Edinburgh, May 2010: "Optimising Application Performance using I/O Servers on Cray XT Technology"
[Chart: HELIUM I/O performance – time (s) vs. cores, with and without I/O servers]
• Use of the I/O server allows full overlap of I/O with computation and communication.
• I/O time is hidden, as I/O requests are offloaded to dedicated I/O servers.

HELIUM results
• Both of the 800 nm runs were successful; each required 16,110 cores, near the maximum available on HECToR.
• The results confirmed present understanding of the high-energy cut-off law and were consistent with results at 390 nm, 248 nm and 195 nm.
• Calculations at higher intensities are not really feasible on 16,110 cores of HECToR, and remain very much a grand challenge.
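The I/O-server idea used in HELIUM and the UM can be sketched in miniature: compute hands each output buffer to a dedicated writer and continues immediately, so the write cost overlaps computation. In the real codes the server is a separate set of MPI ranks; this Python sketch uses a thread and a queue purely to illustrate the pattern, and all names are invented:

```python
import queue
import threading

# Toy I/O server: compute offloads each buffer to a dedicated writer
# and continues immediately, so I/O overlaps computation. In the real
# applications the "server" is a group of MPI ranks, not a thread.

def io_server(requests, written):
    while True:
        item = requests.get()
        if item is None:                     # shutdown sentinel
            break
        step, data = item
        written.append((step, len(data)))    # stand-in for the actual write

requests = queue.Queue()
written = []
server = threading.Thread(target=io_server, args=(requests, written))
server.start()

checksum = 0
for step in range(5):
    data = bytes(1024)                       # stand-in for a field to dump
    requests.put((step, data))               # offload; returns immediately
    checksum += sum(range(step))             # computation proceeds meanwhile

requests.put(None)                           # tell the server to finish
server.join()
print(sorted(written))                       # → [(0, 1024), ..., (4, 1024)]
```

The compute loop never blocks on a write; the writer drains the queue concurrently, which is the overlap the slides describe.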
Gemini ASIC
[Block diagram: two HyperTransport 3 interfaces, NIC 0 and NIC 1, Netlink block, L0 processor and 48-port YARC router]
• Supports 2 nodes per ASIC
• 168 GB/s routing capacity
• Scales to over 100,000 network endpoints
• Link-level reliability and adaptive routing
• Advanced resiliency features
• Provides a global address space
• Advanced NIC designed to efficiently support MPI, one-sided MPI, Shmem, UPC and Coarray Fortran

Cray XE6 node characteristics
• Number of cores: 16 or 24
• Peak performance: 210 Gflops
• Memory size: 32 or 64 GB
• Memory bandwidth: 85 GB/s
• 10 12X Gemini channels (each Gemini acts like two nodes on the 3D torus)
• High-radix YARC router with adaptive routing, 168 GB/s capacity

[Diagram: module with SeaStar vs. module with Gemini on the 3D (X, Y, Z) torus]

Gemini performance features
• Like SeaStar, Gemini has a DMA offload engine, allowing large transfers to proceed asynchronously.
• Gemini provides low-overhead OS-bypass features for short transfers; MPI latency is around 1.4 µs in the current release.
• The NIC provides for many millions of MPI messages per second – 20 times better than SeaStar.
• Much improved one-sided communication mechanism with hardware support; AMOs provide a faster synchronisation method for barriers.
• Gemini supports adaptive routing, which reduces problems with network hot spots and allows MPI to survive link failures.
• Much improved injection bandwidth – 6 GB/s user-data injection with the HT3 link.

Gemini global addressing
• Globally addressable memory provides efficient support for UPC, Coarray Fortran, Shmem and Global Arrays; the Cray compiler targets this capability directly.
• Pipelined global loads and stores allow for fast irregular communication patterns.
• Atomic memory operations provide the fast synchronisation needed for one-sided communication models.

[Chart: SBLI, T1 dataset, Cray XE6 vs. Cray XT6 – Mgrid points × iterations/sec vs. cores]

New Cray Centre of Excellence for HECToR website
http://www.hector.ac.uk/coe/

Cray Exascale Research Initiative Europe
• Launched December 2009; initial research partners are EPCC and CSCS.
• Cray increased the Edinburgh-based team with Alistair Hart and Harvey Richardson.
• Exploring how real applications can exploit future exascale architectures, specifically:
  • Programming models for PGAS languages
  • Programming GPU accelerators
  • Improved algorithms for FFTs
  • Network and I/O profiling
• Strong interactions with Cray R&D

Training and conferences
• CUG 2010, 24–27 May: Using I/O servers to improve performance on Cray XT technology – Tom Edwards and Kevin Roy
• NAIS HPC Workshop, 23–24 June: PGAS programming with UPC and Fortran coarrays – Jason Beech-Brandt and Harvey Richardson
• DEISA Training Course, 14–16 September: Performance optimisation on Cray XT systems; Coarray Fortran tutorial
• Hybrid Manycore Programming and Accelerators in High Performance Computing, 22 October: Challenges in programming multicore and accelerated nodes – John Levesque and Alistair Hart
• SC10: poster – Performance Tuning of Scientific Applications on HPC Systems; poster – GPU Acceleration of Scientific Applications; tutorial – Introduction to Coarray Fortran
• EPCC MSc project supervision: Global Arrays and NWChem tuning on the Cray XT – Jason Beech-Brandt; co-arrays for sparse matrix methods – Tom Edwards
• Visits to NAG, Daresbury and various user sites (Reading, Southampton, Durham, Cambridge, amongst others)

Questions?
Jason Beech-Brandt: [email protected]
Tom Edwards: [email protected]
Kevin Roy: [email protected]
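The AMO-based barrier synchronisation mentioned on the Gemini slides can be sketched as a counter barrier: each process performs one atomic fetch-and-add on a shared counter, and the last arrival releases the rest. A toy shared-memory model follows (a Python lock stands in for the hardware atomic; this is an illustrative sketch, not Cray's implementation):

```python
import threading

# Toy counter-based barrier: each arrival does one atomic
# fetch-and-add; the last arrival releases everyone. Models the role
# played by a hardware AMO -- here a lock provides the atomicity.

class CounterBarrier:
    def __init__(self, nprocs):
        self.nprocs = nprocs
        self.count = 0
        self.lock = threading.Lock()         # stand-in for a hardware AMO
        self.release = threading.Event()

    def wait(self):
        with self.lock:                      # "fetch-and-add" on the counter
            self.count += 1
            arrived = self.count
        if arrived == self.nprocs:
            self.release.set()               # last arrival releases all waiters
        else:
            self.release.wait()

barrier = CounterBarrier(4)
observations = []

def worker():
    barrier.wait()
    observations.append(barrier.count)       # past the barrier, all have arrived

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(observations))                  # → [4, 4, 4, 4]
```

Each participant issues a single remote atomic rather than a round of messages, which is why hardware AMOs make such barriers cheap.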