Cray XD1 Supercomputer

CRAY XD1 DATASHEET

The Cray XD1™ supercomputer combines breakthrough interconnect, management and reconfigurable computing technologies to meet users' demands for exceptional performance, reliability and usability. Designed to meet the requirements of highly demanding HPC applications in fields ranging from product design to weather prediction to scientific research, the Cray XD1 system is an indispensable tool for engineers and scientists to simulate and analyze faster, solve more complex problems, and bring solutions to market sooner.

Cray XD1 System Highlights
■ Purpose-built for HPC — delivers exceptional application performance
■ Affordable power — designed for a broad range of HPC workloads and budgets
■ Linux, 32 and 64-bit x86 compatible — runs a wide variety of ISV applications and open source codes
■ Simplified system administration — automates configuration and management functions
■ Highly reliable — monitors and maintains system health
■ Scalable to hundreds of compute nodes — high bandwidth and low latency let applications scale

Direct Connected Processor Architecture

The Cray XD1 system is based on the Direct Connected Processor (DCP) architecture, harnessing many processors into a single, unified system to deliver new levels of application performance. Cray's implementation of the DCP architecture optimizes message-passing applications by directly linking processors to each other through a high performance interconnect fabric, eliminating shared memory contention and PCI bus bottlenecks.

Compute Processors
12 AMD Opteron™ 64-bit processors run Linux and are organized as six 2-way SMPs to deliver 58 GFLOPs* per chassis. Finely tuned memory and I/O performance removes bottlenecks and maximizes processor performance.

RapidArray™ Interconnect
The industry's fastest embedded switching fabric, the RapidArray interconnect uses 12 custom communications processors and a 96 gigabyte (GB) per second nonblocking switching fabric per chassis to deliver 8 GB per second bandwidth between SMPs with 1.8 microsecond MPI latency. Each chassis presents 24 RapidArray links externally with an aggregate 48 GB per second bandwidth between chassis.

Application Acceleration
Six Xilinx Virtex-II Pro™ Field Programmable Gate Arrays (FPGAs) per chassis attach to the RapidArray fabric for massively parallel execution of critical algorithm components, promising orders of magnitude performance improvement for target applications.

Active Management
A management processor in each chassis, a realtime operating system, distributed software and an independent supervisory network operate together to monitor, control, and manage every aspect of the computer. The Cray XD1 system enables single system command and control and provides extensive high availability features.

Highly modular, the Cray XD1 base unit is a chassis. Up to 12 chassis can be installed in a rack, and multirack configurations integrate hundreds of processors into a single system.

                               Each Chassis   Each Rack
Compute Processors             12             144
Performance                    58 GFLOPs*     691 GFLOPs*
Aggregate Switching Capacity   96 GB/s        1152 GB/s
MPI Interprocessor Latency     1.8 µsec       2.0 µsec
Aggregate Memory Bandwidth     77 GB/s        922 GB/s
Maximum Memory                 96 GB          1.2 TB
Maximum Disk Storage           1.5 TB         18 TB

* peak theoretical performance with 2.4 GHz AMD Opteron
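The starred peak figures are consistent with the usual two double-precision floating-point operations per clock for an Opteron of this generation; that per-clock figure is an assumption of this back-of-the-envelope check, not a statement from the datasheet:

```latex
\[
  12 \times 2.4\,\mathrm{GHz} \times 2\,\mathrm{flops/clock} = 57.6 \approx 58\ \mathrm{GFLOPs\ per\ chassis}
\]
\[
  144 \times 2.4\,\mathrm{GHz} \times 2\,\mathrm{flops/clock} = 691.2 \approx 691\ \mathrm{GFLOPs\ per\ rack}
\]
```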
The Cray XD1 HPC system is based on the DCP architecture. This innovative new computer unifies up to hundreds of processors into a single, balanced computer, delivering exceptional performance, reliability, and usability. The Cray XD1 architecture includes four key subsystems:
• Compute Environment
• RapidArray Interconnect
• Application Acceleration
• Active Management

Compute Environment

The Cray XD1 compute subsystem is composed of a high performance Linux operating system and AMD Opteron 64-bit processors.

Direct Connect Bandwidth
The AMD Opteron processor's integrated memory controller increases memory bandwidth and reduces latencies. Three HyperTransport™ links per processor provide up to 19.2 GB per second of I/O bandwidth.

RapidArray Interconnect

The Cray XD1 RapidArray interconnect directly connects processors over high-speed, low-latency pathways. This DCP architecture eliminates the memory contention and PCI bus bottlenecks. Each chassis includes:
• 12 custom communications processors
• a 96 GB per second nonblocking embedded switching fabric
• optimized communications libraries

RapidArray Communications Processors
Tightly coupled to the AMD Opterons and the switching fabric, these processors handle memory-to-memory copies, global memory management, and system-wide process synchronization, freeing the AMD Opterons to perform core compute tasks and enabling concurrent computing and communication. The communications processors deliver 8 GB per second bandwidth with 1.8 microsecond MPI latency between SMPs, and 1.3 microsecond latency on the randomly ordered ring latency test (part of the HPCC suite). With interconnect bandwidth on par with memory bandwidth, a major system bottleneck is removed for many applications, improving performance and greatly simplifying software development.

RapidArray Embedded Switching Fabric
A 96 GB per second, nonblocking, crossbar switching fabric in each chassis provides four 2 GB per second links to each two-way SMP and twenty-four 2 GB per second interchassis links.

RapidArray Communications Libraries
The RapidArray communications libraries work in conjunction with the RapidArray communications processors to streamline communications protocols, bypassing the Linux kernel where possible and optimizing the compute/communicate interaction. The RapidArray interconnect delivers outstanding performance for highly parallel applications built upon the common MPI, shmem, and Global Arrays standards.

System Topologies
The Cray XD1 embedded switching fabric and 24 interchassis links enable a wide variety of network topologies. Topologies initially supported include direct connect and fat tree.
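Figures like the 8 GB per second bandwidth and 1.8 microsecond MPI latency quoted above are the kind of numbers typically obtained with a simple ping-pong microbenchmark between two ranks. The sketch below is not from the datasheet and makes no claims about the RapidArray software stack; it is a minimal, generic MPI test that estimates one-way latency (small messages) and bandwidth (large messages).

```c
/* Minimal MPI ping-pong sketch (illustrative, not datasheet code).
 * Rank 0 sends a buffer to rank 1 and waits for it to come back;
 * half the round-trip time approximates one-way latency, and
 * bytes / one-way time approximates bandwidth for large messages. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    const size_t nbytes = (argc > 1) ? (size_t)atol(argv[1]) : 8;  /* 8 B for latency, MBs for bandwidth */
    char *buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, (int)nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double one_way = (MPI_Wtime() - t0) / (2.0 * iters);

    if (rank == 0)
        printf("%zu bytes: %.2f us one-way, %.2f MB/s\n",
               nbytes, one_way * 1e6, (double)nbytes / one_way / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks placed on different SMPs to exercise the interconnect rather than shared memory; measured numbers depend on the MPI implementation and message sizes used.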
Application Acceleration

The application acceleration subsystem incorporates reconfigurable computing capabilities to deliver superlinear speedup of targeted applications. Each Cray XD1 chassis can be configured with six application acceleration processors, based on Xilinx Virtex-II Pro FPGAs, that can be programmed to accelerate key algorithms. Well suited to functions such as all_reduce operations, searching, sorting and signal processing, the application acceleration subsystem acts as a coprocessor to the AMD Opterons, handling the computationally intensive and highly repetitive algorithms that can be significantly accelerated through parallel execution. The application acceleration processors are tightly integrated with Linux and the AMD Opterons and use standard software programming APIs, removing a major obstacle to application speedup.

Global Process Synchronization
The system's Linux scheduler has been optimized to synchronize processes across the system. By ensuring that an application executes in the same timeslot system-wide, time spent in wait-states can be minimized, substantially improving processor efficiency.

Standards Based
The Linux operating system, combined with AMD Opteron's support for 32 and 64-bit x86 compatible computing, allows users to take advantage of a wide range of commercial and open-source applications.

Active Management

The active management subsystem delivers outstanding usability and reliability through single system command and control and self-healing capabilities.

Single System Command and Control
The active management system combines partitioning and intelligent self-configuring features to allow administrators to view and manage hundreds of processors as one or more logical computers. Partitions divide the Cray XD1 system into logical computers. Administrators view and operate on partitions, rather than on individual SMPs. Self-configuration features automatically translate partition details into SMP configurations. Transaction processing techniques ensure configuration integrity.

Single system control is provided over common administrative functions: system configuration, monitoring and fault tolerance, software upgrades, storage management, network management, user management, security, and resource and queue management. These functions may be accessed using the active manager graphical user interface (GUI) or a command line interface (CLI). Single system command and control eliminates common configuration problems.

Fault Detection: In each chassis, a dedicated management processor with its own supervisory network continuously monitors over 200 critical hardware functions, including temperatures, voltages, in-rush currents, parity errors, and component diagnostics. The management system also monitors the sanity and operation of the Linux operating system and key internal services such as DNS, NIS, and LDAP.

Proactive Management: Sophisticated proactive controls …

Balance: The Key to Exceptional Application Performance
It takes more than simply adding processors to increase HPC application performance. MPI applications, such as simulation and modeling, are compute- and communicate-intensive: they must perform millions of calculations and then exchange intermediate results before beginning another iteration. To maximize processor performance, this exchange of data must be fast enough to prevent the compute processor from sitting idle, waiting for data.
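The iterate-compute-exchange pattern described above maps directly onto MPI collectives such as MPI_Allreduce (one of the all_reduce-style operations the acceleration subsystem is said to be well suited to). The sketch below is generic illustrative code, not datasheet material; local_residual() is a hypothetical stand-in for an application's real compute kernel. It shows the structure: each rank computes on its partition, then all ranks exchange an intermediate result before the next iteration, so the speed of that exchange bounds how little time processors spend idle.

```c
/* Illustrative compute/communicate loop (not from the datasheet): each rank
 * updates its local partition, then MPI_Allreduce combines a global residual
 * before the next iteration.  local_residual() is a hypothetical placeholder
 * for the application's actual compute kernel. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

#define N_LOCAL 100000

static double local_residual(double *u, int n, int iter)
{
    double r = 0.0;
    for (int i = 0; i < n; i++) {          /* stand-in "millions of calculations" */
        u[i] = 0.5 * (u[i] + sin(u[i] + iter));
        r += u[i] * u[i];
    }
    return r;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double u[N_LOCAL];              /* this rank's share of the problem */
    double global = 1.0;

    for (int iter = 0; iter < 100 && global > 1e-6; iter++) {
        double local = local_residual(u, N_LOCAL, iter);

        /* Exchange of intermediate results: every rank blocks here until the
         * reduction completes, so interconnect latency and bandwidth directly
         * determine how long processors wait between compute phases. */
        MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        if (rank == 0 && iter % 10 == 0)
            printf("iter %d: global residual %g\n", iter, global);
    }

    MPI_Finalize();
    return 0;
}
```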