Cray XD1™ Supercomputer Release 1.3


CRAY XD1 DATASHEET

The Cray XD1 supercomputer combines breakthrough interconnect, management and reconfigurable computing technologies to meet users' demands for exceptional performance, reliability and usability. Designed to meet the requirements of highly demanding HPC applications in fields ranging from product design to weather prediction to scientific research, the Cray XD1 system is an indispensable tool for engineers and scientists to simulate and analyze faster, solve more complex problems, and bring solutions to market sooner.

Cray XD1 System Highlights

■ Purpose-built for HPC — delivers exceptional application performance
■ Affordable power — designed for a broad range of HPC workloads and budgets
■ Linux, 32 and 64-bit x86 compatible — runs a wide variety of ISV applications and open source codes
■ Simplified system administration — automates configuration and management functions
■ Highly reliable — monitors and maintains system health
■ Scalable to hundreds of compute nodes — high bandwidth and low latency let applications scale

Direct Connected Processor Architecture

The Cray XD1 system is based on the Direct Connected Processor (DCP) architecture, harnessing many processors into a single, unified system to deliver new levels of application performance. Cray's implementation of the DCP architecture optimizes message-passing applications by directly linking processors to each other through a high performance interconnect fabric, eliminating shared memory contention and PCI bus bottlenecks. Matching memory and I/O performance removes bottlenecks and maximizes processor performance.

Compute Processors

12 AMD Opteron™ 64-bit single or dual core processors run Linux and are organized as six nodes of 2- or 4-way SMPs to deliver up to 106 GFLOPS* per chassis.

RapidArray™ Interconnect

The industry's fastest embedded switching fabric, the RapidArray interconnect uses 12 custom communications processors and a 96 gigabyte (GB) per second nonblocking switching fabric per chassis to deliver 8 GB per second bandwidth between nodes with 1.7 microsecond MPI latency. Each chassis presents 24 RapidArray links externally with an aggregate 48 GB per second bandwidth between chassis.

Application Acceleration

Six Xilinx Virtex-4 Field Programmable Gate Arrays (FPGAs) per chassis attach to the RapidArray fabric for massively parallel execution of critical algorithm components, promising orders of magnitude performance improvement for target applications.

Active Management

A management processor in each chassis, a realtime operating system, distributed software and an independent supervisory network operate together to monitor, control and manage every aspect of the computer. The Cray XD1 system enables single system command and control and provides extensive high availability features.

Highly modular, the Cray XD1 base unit is a chassis. Up to 12 chassis can be installed in a cabinet, and multi-cabinet configurations integrate hundreds of processors into a single system.

                                 Chassis         Each Cabinet
  Compute Processors             12              144
  Performance                    106 GFLOPS*     1.27 TFLOPS*
  Aggregate Switching Capacity   96 GB/s         1152 GB/s
  MPI Interprocessor Latency     1.7 µsec        2.0 µsec
  Aggregate Memory Bandwidth     77 GB/s         922 GB/s
  Maximum Memory                 96 GB           1.2 TB
  Maximum Local Disk Storage     1.5 TB          18 TB

* peak theoretical performance with 2.2 GHz Dual Core AMD Opteron processors
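The performance rows in the table follow from simple arithmetic. A back-of-the-envelope check, assuming the Opteron core's peak rate of two double-precision floating-point operations per clock (an assumption consistent with the 2.2 GHz dual-core parts named in the footnote, not a figure stated in the datasheet):

```latex
% Per chassis: 12 processors x 2 cores x 2.2 GHz x 2 flops/cycle (assumed peak DP rate)
12 \times 2 \times 2.2\,\mathrm{GHz} \times 2\,\tfrac{\mathrm{flops}}{\mathrm{cycle}}
  = 105.6 \approx 106\ \mathrm{GFLOPS}
% Per cabinet: 12 chassis
12 \times 105.6\,\mathrm{GFLOPS} = 1267.2\ \mathrm{GFLOPS} \approx 1.27\ \mathrm{TFLOPS}
```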
The Cray XD1 HPC system is based on the DCP architecture. This innovative computer unifies up to hundreds of processors into a single, balanced system, delivering exceptional performance, reliability and usability. The Cray XD1 architecture includes four key subsystems:

• Compute Environment
• RapidArray Interconnect
• Application Acceleration
• Active Management

Compute Environment

The Cray XD1 compute subsystem is composed of a high-performance Linux operating system and AMD Opteron 64-bit processors.

Direct Connect

The AMD Opteron processor's integrated memory controller increases memory bandwidth and reduces latencies. Three HyperTransport™ links per processor provide up to 19.2 GB per second I/O bandwidth.

Dual Core Processors

Dual Core AMD Opteron processors are available as an option on the Cray XD1 system. Two processing cores exist on a single die, doubling peak performance without raising power consumption and heat levels.

RapidArray Interconnect

The Cray XD1 RapidArray interconnect directly connects processors over high-speed, low-latency pathways, eliminating memory contention and PCI bus bottlenecks. Each fully configured chassis includes:

• 12 custom communications processors
• a 96 GB per second nonblocking embedded switch fabric, which includes main and optional expansion fabrics
• optimized communications libraries

RapidArray Communications Processors

Tightly coupled to the AMD Opteron processors and switching fabric, the RapidArray processors offload and accelerate communications functions from the Opteron processor, freeing it to perform core compute tasks and enabling concurrent computing and communication. The communications processors deliver up to 8 GB per second bandwidth with 1.7 microsecond MPI latency between nodes, and 1.1 microsecond latency on the randomly ordered ring latency test, part of the HPC Challenge (HPCC) benchmark suite. With interconnect bandwidth on par with memory bandwidth, a major system bottleneck is removed for many applications, improving performance and greatly simplifying software development.

RapidArray Embedded Switching Fabric

A 96 GB per second, nonblocking, crossbar switching fabric in each chassis provides four 2 GB per second links to each node and twenty-four 2 GB per second interchassis links.

RapidArray Communications Libraries

The RapidArray communications libraries work in conjunction with the RapidArray communications processors to streamline communications protocols, bypassing the Linux kernel where possible and optimizing the compute/communicate interaction. The RapidArray interconnect delivers outstanding performance for highly parallel applications built upon the MPI, shmem, and Global Arrays standards.
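Because the libraries expose standard interfaces, the quoted latency can be probed with ordinary MPI code. Below is a minimal ping-pong sketch in plain, portable MPI (nothing XD1-specific) of the kind commonly used to measure the one-way small-message latency cited above; the 8-byte payload, iteration count, and program name are illustrative choices, not values from the datasheet.

```c
/* pingpong.c: estimate one-way MPI latency between two ranks.
 * Rank 0 sends a small message to rank 1, which echoes it back;
 * half the average round-trip time approximates the one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i;
    const int iters = 10000;
    char buf[8] = {0};            /* small payload, typical for latency tests */
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);  /* start both ranks together */
    double t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f microseconds\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```

Compiled with mpicc and run with two ranks placed on different nodes (for example, mpirun -np 2 ./pingpong), the reported time approximates the internode MPI latency.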
System Topologies

The Cray XD1 embedded switching fabric and 24 interchassis links enable a wide variety of network topologies. Topologies initially supported include direct connect and fat tree. A fat tree topology is ideal for larger systems (or small systems that are expected to grow) that run applications using all-to-one, one-to-all, or all-to-all communications. Systems of up to 24 chassis can be supported with a single switching layer fat tree. Each half of the RapidArray fabric forms a separate fat tree, with its own set of external switches.

Lustre Global Parallel File System

The Lustre parallel file system provides a high-performance, high-availability, object-based storage architecture for the Cray XD1 system and can scale to thousands of nodes. Lustre runs natively across the Cray XD1 RapidArray interconnect, avoiding TCP/IP overheads.

Application Acceleration

The application acceleration subsystem incorporates reconfigurable computing capabilities to deliver superlinear speedup for a wide variety of targeted HPC codes. Each Cray XD1 chassis can be configured with six application acceleration processors, based on Xilinx Virtex-4 FPGAs, that can be programmed to accelerate key algorithms. Well suited to functions such as all_reduce operations, searching, sorting and signal processing, the application acceleration subsystem acts as a coprocessor to the AMD Opteron processors, handling the computationally intensive and highly repetitive algorithms that can be significantly accelerated through parallel execution. The application acceleration processors are tightly integrated with Linux and the AMD Opteron processors and use standard software programming APIs, removing a major obstacle to application speedup.

Global Process Synchronization

Optimizations to the Linux scheduler result in the synchronization of process execution across the system, minimizing time spent by applications at barriers and increasing application performance by as much as 50 percent.

Standards Based

The Linux operating system, combined with the AMD Opteron processor's support for 32 and 64-bit x86 compatible computing, allows users to take advantage of a wide range of commercial and open-source applications.

Balance: The Key to Exceptional Application Performance

It takes more than simply adding processors to increase HPC application performance. MPI applications, such as simulation and modeling, are compute- and communicate-intensive. They must perform millions of calculations and then exchange intermediate results before beginning another iteration.

Active Management

The active management subsystem delivers outstanding usability and reliability through single system command and control and self-healing capabilities.

Self-Healing

The Cray XD1 system provides extensive fault detection, isolation, and prediction capabilities, coupled with automated proactive and reactive self-healing intelligence.

Single System Command and Control

The active management ...
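As a closing cross-check on the balance claim, the bandwidth figures quoted throughout this datasheet are mutually consistent, assuming every RapidArray link runs at the stated 2 GB per second; the per-processor memory figure of 6.4 GB/s (dual-channel DDR-400) is an added assumption, not a number from the datasheet:

```latex
\begin{align*}
\text{Node links:}     \quad & 4 \times 2 = 8\ \mathrm{GB/s}               && \text{(bandwidth between nodes)}\\
\text{Chassis fabric:} \quad & 6 \times 8 + 24 \times 2 = 96\ \mathrm{GB/s} && \text{(aggregate switching capacity)}\\
\text{Interchassis:}   \quad & 24 \times 2 = 48\ \mathrm{GB/s}             && \text{(bandwidth between chassis)}\\
\text{Cabinet fabric:} \quad & 12 \times 96 = 1152\ \mathrm{GB/s}\\
\text{Chassis memory:} \quad & 12 \times 6.4 = 76.8 \approx 77\ \mathrm{GB/s} && \text{(assumes dual-channel DDR-400)}\\
\text{Cabinet memory:} \quad & 12 \times 76.8 = 921.6 \approx 922\ \mathrm{GB/s}
\end{align*}
```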