Lessons Learned in Deploying the World's Largest Scale Lustre File System

Galen M. Shipman, David A. Dillow, Sarp Oral, Feiyi Wang, Douglas Fuller, Jason Hill, Zhe Zhang
Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory
Oak Ridge, TN 37831, USA
{gshipman, dillowda, oralhs, fwang2, fullerdj, hilljj, ...}@ornl.gov

Abstract

The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/s) and 17x (10 Petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x. In addition to these scalability challenges, moving to a center-wide shared file system required dramatically improved resiliency and fault-tolerance mechanisms. This paper details our efforts in designing, deploying, and operating Spider. Through a phased approach of research and development, prototyping, deployment, and transition to operations, this work has resulted in a number of insights into large-scale parallel file system architectures, from both the design and the operational perspectives. We present our solutions to issues such as network congestion, performance baselining and evaluation, file system journaling overheads, and high availability in a system with tens of thousands of components. We also discuss areas of continued challenge, such as stressed metadata performance and the need for file system quality of service, along with our efforts to address them. Finally, operational aspects of managing a system of this scale are discussed along with real-world data and observations.

1 Introduction

The Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) hosts the world's most powerful supercomputer, Jaguar [2, 14, 7], a 2.332 Petaflop/s Cray XT5 [5]. OLCF also hosts an array of other computational resources such as a 263 Teraflop/s Cray XT4 [1], visualization, and application development platforms. Each of these systems requires a reliable, high-performance, and scalable file system for data storage.

Parallel file systems on leadership-class systems have traditionally been tightly coupled to single simulation platforms. This approach resulted in the deployment of a dedicated file system for each computational platform at the OLCF, creating islands of data. These dedicated file systems impacted user productivity due to costly data transfers and added substantially to the total system deployment cost.

Recognizing these problems, the OLCF initiated the Spider project to deploy a center-wide parallel file system capable of serving the needs of its computational resources. The project had a number of ambitious goals:

1. To provide a single shared storage pool for all computational resources.

2. To meet the aggregate performance and scalability requirements of all OLCF platforms.

3. To provide resilience against system failures internal to the storage system as well as failures of any computational resources.

4. To allow growth of the storage pool independent of the computational platforms.

The OLCF began a feasibility evaluation in 2006, initially focusing on developing prototype systems and integrating these systems within the OLCF. While the OLCF was the primary driver of this initiative, partnerships with Cray, Sun Microsystems, and DataDirect Networks (DDN) were developed to achieve the project's ambitious technical goals. A Lustre Center of Excellence (LCE) at ORNL was established as a result of our partnership with Sun.

This paper presents our efforts in pushing Spider toward its pre-designated goals. The remainder of it is organized as follows: Section 2 provides an overview of the Spider architecture. Key challenges and their resolutions are described in Section 3. Section 4 describes our approaches and challenges in day-to-day operations since Spider was transitioned to full production in June 2009. Finally, conclusions are discussed in Section 5.

2 Architecture

2.1 Spider

Spider is a Lustre-based [21, 25] center-wide file system replacing multiple file systems within the OLCF. It provides centralized access to Petascale data sets from all OLCF platforms, eliminating islands of data.

Unlike many existing storage systems based on local high-performance RAID sets for each computation platform, Spider is a large-scale shared storage cluster. 48 DDN S2A9900 [6] controller couplets provide storage which in aggregate delivers over 240 GB/s of bandwidth and over 10 Petabytes of RAID 6 formatted capacity from 13,440 1-Terabyte SATA drives.

The DDN S2A9900 couplet is composed of two singlets (independent RAID controllers). Coherency is loosely maintained over a dedicated Serial Attached SCSI (SAS) link between the controllers. This is sufficient for ensuring consistency for a system with write-back cache disabled. An XScale processor [9] manages the system but is not in the direct data path. In our configuration, host-side interfaces in each singlet are populated with two dual-port 4x Double Data Rate (DDR) InfiniBand (IB) Host Channel Adapters (HCAs). The back-end disks are connected via ten SAS links to each singlet. For Spider's SATA-based system, these SAS links connect to expander modules within each disk shelf. The expanders then connect to SAS-to-SATA adapters on each drive. All components have redundant paths. Each singlet and disk tray has dual power supplies, one of which is protected by an uninterruptible power supply (UPS). Figure 1 illustrates the internal architecture of a DDN S2A9900 couplet and Figure 2 shows the overall Spider architecture.

[Figure 1: Internal architecture of a S2A9900 couplet, showing the two disk controllers and the 28 disk tiers spread across the drive channels.]

[Figure 2: Overall Spider architecture — 192 Spider OSS servers connected to the SION IB network by 192 4x DDR IB links, backed by DDN S2A9900 couplets providing 7 8+2 tiers per OSS.]

The DDN storage is accessed through 192 Dell dual-socket quad-core Lustre OSSs (object storage servers) providing over 14 Teraflop/s of performance and 3 Terabytes of memory in aggregate. Each OSS can provide in excess of 1.25 GB/s of file-system-level performance. Metadata is stored on 2 LSI Engenio 7900s (XBB2) [11] and is served by 3 Dell quad-socket quad-core systems.

Each DDN S2A9900 couplet is configured with 28 RAID 6 8+2 tiers. Four OSSs provide access to these 28 tiers (7 each). OLCF compute platforms are configured with Lustre routers to access Spider's OSSs.
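As a quick cross-check of the aggregate figures quoted above (13,440 drives, over 10 Petabytes of RAID 6 formatted capacity, 192 OSSs, and over 240 GB/s), the short Python sketch below recomputes them from the per-couplet configuration. The drive count, tier geometry, and per-OSS rate come from the text; treating formatted capacity as simply the data drives times 1 decimal Terabyte is our simplification.

```python
# Cross-check of Spider's aggregate drive count, capacity, and bandwidth
# from the per-couplet configuration described in Section 2.1.

COUPLETS = 48             # DDN S2A9900 controller couplets
TIERS_PER_COUPLET = 28    # RAID 6 tiers per couplet
DRIVES_PER_TIER = 10      # 8 data + 2 parity drives (8+2)
DATA_DRIVES_PER_TIER = 8
DRIVE_TB = 1.0            # 1 Terabyte SATA drives (decimal TB assumed)
OSSES_PER_COUPLET = 4     # four OSSs per couplet, 7 tiers each
OSS_BW_GBPS = 1.25        # file-system-level bandwidth per OSS (GB/s)

drives = COUPLETS * TIERS_PER_COUPLET * DRIVES_PER_TIER
formatted_pb = COUPLETS * TIERS_PER_COUPLET * DATA_DRIVES_PER_TIER * DRIVE_TB / 1000.0
osses = COUPLETS * OSSES_PER_COUPLET
aggregate_bw_gbps = osses * OSS_BW_GBPS

print(f"SATA drives:         {drives}")                      # 13440
print(f"formatted capacity:  {formatted_pb:.2f} PB")          # ~10.75 PB
print(f"OSS count:           {osses}")                        # 192
print(f"aggregate bandwidth: {aggregate_bw_gbps:.0f} GB/s")   # 240 GB/s
```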
All DDN controllers, OSSs, and Lustre routers are configured in partnered failover pairs to ensure file system availability in the event of any component failure. Connectivity between Spider and OLCF platforms is provided by our large-scale 4x DDR IB network, dubbed the Scalable I/O Network (SION).

Moving toward a centralized file system requires increased redundancy and fault tolerance. Spider is designed to eliminate single points of failure and thereby maximize availability. By using failover pairs, multiple networking paths, and the resiliency features of the Lustre file system, Spider provides a reliable, high-performance, centralized storage solution, greatly enhancing our capability to deliver scientific insight.

On Jaguar XT5, 192 Cray Service I/O (SIO) nodes are configured as Lustre routers. Each SIO node has 8 AMD Opteron cores and 8 GBytes of RAM and is connected to Cray's SeaStar2+ [3, 4] network. Each SIO node is connected to SION using Mellanox ConnectX [22] host channel adapters (HCAs). These Lustre routers allow compute nodes within the SeaStar2+ torus to access the Spider file system at speeds in excess of 1.25 GB/s per compute node. The Jaguar XT4 partition is similarly configured with 48 Cray SIO nodes acting as Lustre routers. In aggregate, the XT5 partition has over 240 GB/s of storage throughput while the XT4 partition has over 60 GB/s. Other OLCF platforms are similarly configured.
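The router counts above also show how the network side of the design was balanced against the server side: multiplying each partition's router count by the roughly 1.25 GB/s a single link sustains reproduces the quoted aggregates. A minimal sketch, assuming the same per-link rate on routers and OSSs (the text quotes 1.25 GB/s for both):

```python
# Bandwidth balance between the Lustre routers and Spider's OSSs, using the
# component counts quoted in Section 2.1.

PER_LINK_GBPS = 1.25   # approximate sustained rate per OSS and per SIO router

sides = {
    "Spider OSSs":        192 * PER_LINK_GBPS,  # 240 GB/s of server bandwidth
    "Jaguar XT5 routers": 192 * PER_LINK_GBPS,  # 240 GB/s into the XT5 torus
    "Jaguar XT4 routers":  48 * PER_LINK_GBPS,  #  60 GB/s into the XT4 torus
}

for name, gbps in sides.items():
    print(f"{name:<20} {gbps:6.0f} GB/s")
```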
2.2 Scalable I/O Network (SION)

In order to provide true integration among all systems hosted by OLCF, a high-performance, large-scale IB [8] network, dubbed SION, has been deployed. SION provides advanced capabilities including resource sharing and communication between the two segments of Jaguar and Spider, and real-time visualization, streaming data [...]

[Figure 3: Scalable I/O Network (SION) architecture — core and aggregation IB switches connecting the Jaguar XT5 segment, the Jaguar XT4 segment, Lens/Everest, Smoky, and Spider's 48 leaf switches.]

[...] the fourth provides connectivity to all other OLCF resources via the aggregation switch. Spider is connected to the core switches via 48 24-port IB switches, allowing storage to be accessed directly from SION. Additional switches provide connectivity for the remaining OLCF platforms. SION is routed using OpenSM [24] with its min-hop routing algorithm, with an added list of IB GUIDs that have their forwarding entries placed in order before the general population.
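Placing selected forwarding entries ahead of the general population is normally done by handing OpenSM an ordered list of port GUIDs (for example via OpenSM's guid_routing_order_file option used with the min-hop and up/down engines; the exact option name and file format should be verified against the deployed OpenSM version). The Python sketch below generates such a file; the GUID values and file path are placeholders, and in practice the GUIDs for the OSS and router HCAs would be harvested from the fabric (e.g., from ibnetdiscover output).

```python
# Sketch: build an ordered GUID list so the subnet manager routes Spider's
# OSS and Lustre-router ports before the general population.  The consuming
# option (e.g., OpenSM's guid_routing_order_file) is an assumption here.

def write_guid_order(path, priority_guids):
    """Write one port GUID per line, highest routing priority first."""
    with open(path, "w") as f:
        for guid in priority_guids:
            f.write(guid + "\n")

if __name__ == "__main__":
    # Placeholder GUIDs for illustration only.
    oss_ports = [f"0x0002c9030000a{i:03x}" for i in range(192)]      # 192 OSS HCAs
    router_ports = [f"0x0002c9030000b{i:03x}" for i in range(240)]   # 192 XT5 + 48 XT4 routers
    write_guid_order("guid_routing_order.conf", oss_ports + router_ports)
```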