High Performance Computing (HPC)


Vol. 20 | No. 2 | 2013 | Globe at a Glance | Pointers | Spinouts

Editor's column

"End of the road for Roadrunner," Los Alamos National Laboratory, March 29, 2013.a

Five years after becoming the fastest supercomputer in the world, Roadrunner was decommissioned by the Los Alamos National Lab on March 31, 2013. It was the first supercomputer to reach the petaflop barrier—one million billion calculations per second. In addition, Roadrunner's unique design combined two different kinds of processors, making it the first "hybrid" supercomputer. And it still held the number 22 spot on the TOP500 list when it was turned off.

Essentially, Roadrunner became too power inefficient for Los Alamos to keep running. As of November 2012, Roadrunner required 2,345 kilowatts to hit 1.042 petaflops, or 444 megaflops per watt. In contrast, Oak Ridge National Laboratory's Titan, which was number one on the November 2012 TOP500 list, was 18 times faster yet five times more efficient.

In addition, data-intensive applications for supercomputers are becoming increasingly important. According to the developers of the Graph500 benchmarks, these data-intensive applications are "ill-suited for platforms designed for 3D physics simulations," the very purpose for which Roadrunner was designed. New supercomputer architectures and software systems must be designed to support such applications.

These questions of power efficiency and changing computational models are at the core of moving supercomputers toward exascale computing, which industry experts estimate will occur sometime between 2020 and 2030. They are also the questions addressed in this issue of The Next Wave (TNW).

Look for articles on emerging technologies in supercomputing centers and the development of new supercomputer architectures, as well as a brief introduction to quantum computing. While this column takes the reader to the recent past of supercomputing, the remainder of the issue will propel you "beyond digital" to the future of advanced computing systems.

Managing Editor, The Next Wave

a. Press release, Los Alamos National Laboratory, 29 March 2013. Available at: http://www.lanl.gov/newsroom/news-releases/2013/March/03.29-end-of-roadrunner.php.
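For readers who want to reproduce the efficiency comparison in the column, the short sketch below works through the arithmetic. The Roadrunner figures (1.042 petaflops at 2,345 kilowatts) come from the column itself; the Titan values are derived only from the stated "18 times faster, five times more efficient" ratios rather than from a specification sheet, and the variable names are mine.

    # Energy-efficiency arithmetic from the editor's column (illustrative only).
    roadrunner_flops = 1.042e15      # 1.042 petaflops (November 2012 TOP500 run)
    roadrunner_watts = 2_345_000     # 2,345 kilowatts

    rr_mflops_per_watt = roadrunner_flops / roadrunner_watts / 1e6
    print(f"Roadrunner: {rr_mflops_per_watt:.0f} megaflops per watt")   # about 444

    # Titan, expressed purely through the column's ratios (an assumption, not a spec sheet).
    titan_flops = 18 * roadrunner_flops                  # "18 times faster"
    titan_mflops_per_watt = 5 * rr_mflops_per_watt       # "five times more efficient"
    titan_watts = titan_flops / (titan_mflops_per_watt * 1e6)
    print(f"Titan (implied): {titan_flops/1e15:.1f} petaflops "
          f"at roughly {titan_watts/1e6:.1f} megawatts")

The implied Titan numbers come out close to the figures reported for that machine on the November 2012 TOP500 list (about 17.6 petaflops at 8.2 megawatts), which is a useful sanity check on the column's ratios.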
Contents

Defining the future with modeling, simulation, and emulation
Predicting the performance of extreme-scale supercomputer networks
Doing more with less: Cooling computers with oil pays off
Energy-efficient superconducting computing coming up to speed
Beyond digital: A brief introduction to quantum computing
GLOBE AT A GLANCE
POINTERS
SPINOUTS

The Next Wave is published to disseminate technical advancements and research activities in telecommunications and information technologies. Mentions of company names or commercial products do not imply endorsement by the US Government. This publication is available online at http://www.nsa.gov/research/tnw/tnw202/article1.shtml. For more information, please contact us at [email protected].

Defining the future with modeling, simulation, and emulation

Benjamin Payne, PhD, and Noel Wheeler

Information has become a key currency and driving force in modern society. Keeping our leaders apprised of the most current information enables them to make the decisions essential to keep our country safe.

The National Security Agency's reputation as the "nation's guardian" relies heavily on the flow of information, all of which is handled by an awe-inspiring array of computers and networks. Some of the problems encountered by NSA require a special breed of machines known as high-performance computers, or supercomputers. Employing these powerful machines comes with a considerable price tag to the US government. When acquiring supercomputers, decision makers need to have a degree of confidence that a new computer will be suitable, even in cases when the machine does not yet exist.

What is a supercomputer?

Although supercomputers are unique, custom-built machines, they fundamentally share the design of the computers you use at home—a processor (i.e., a central processing unit or CPU), small and fast memory (i.e., random-access memory or RAM), storage (i.e., a hard disk drive/CD/DVD), and a network to communicate with other computers. A typical high-performance computing (HPC) system could be considered a personal computer on a much grander scale, with tens of thousands of processors, terabytes (i.e., trillions of bytes) of memory, and petabytes (i.e., quadrillions of bytes) of storage (see figure 1). High-performance computers can readily fill a large room, if not a whole building, have customized cooling infrastructure, use enough electricity to power a small town, and take an act of Congress to purchase. Such an investment is not made without a great deal of study and thought.

[Figure 1 diagram: two processors rated at 200+ gigaflops each, connected through an interconnect at 0.1 byte per second per flop; each processor reaches its local memory (capacity 0.1 byte per flop) at 1 byte per second per flop, and disk storage (capacity 10 bytes per flop) at 0.01 byte per second per flop.] FIGURE 1. A high-performance computer is like a personal computer on a much grander scale—it has tens of thousands of processors, terabytes of memory, and petabytes of storage.
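The ratios in figure 1 are a compact way of expressing machine balance: once a target compute rate is chosen, they dictate how much memory bandwidth, interconnect bandwidth, and storage a balanced system needs. The sketch below applies the figure's nominal ratios to a hypothetical 10-petaflop machine; the target rate and the resulting numbers are illustrative assumptions, not a description of any particular system.

    # Scale figure 1's nominal balance ratios to a hypothetical machine (illustrative).
    peak_flops = 10e15                  # assumed target: 10 petaflops

    balance = {                         # ratios read from figure 1
        "memory bandwidth (bytes/s)":       1.0  * peak_flops,   # 1 byte/s per flop
        "interconnect bandwidth (bytes/s)": 0.1  * peak_flops,   # 0.1 byte/s per flop
        "storage bandwidth (bytes/s)":      0.01 * peak_flops,   # 0.01 byte/s per flop
        "memory capacity (bytes)":          0.1  * peak_flops,   # 0.1 byte per flop
        "disk capacity (bytes)":            10.0 * peak_flops,   # 10 bytes per flop
    }

    for name, value in balance.items():
        print(f"{name:35s} {value:.2e}")

At the assumed 10 petaflops, these ratios imply roughly a petabyte of memory and 100 petabytes of disk: the kind of back-of-the-envelope sizing that typically precedes any detailed modeling of a candidate machine.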
Simulating a supercomputer

Although HPC technology is not unique to NSA, the specialized problems faced by the Agency can necessitate unique customizations. Because NSA's applications and software are often classified, they cannot be shared with the architects and engineers developing supercomputers. At the same time, an investment of this magnitude requires confidence that a proposed system will offer the performance sought.

Currently, benchmarks—simplified unclassified software that exercises important attributes of a computer system—are developed and used to evaluate the performance of potential computing system hardware. However, these benchmarks may not paint the complete picture. To better understand this problem, there is substantial value in the construction of a model. Architects, engineers, and scientists have a long history of building models to study complex objects, such as buildings, bridges, and aircraft.

A new team—the Modeling, Simulation, and Emulation (MSE) team—within the Laboratory for Physical Sciences' Advanced Computing Systems Research Program [1] has been assembled to address this gap between classified software, which cannot be distributed to vendors, and the vendors' hardware systems, which have not been purchased by NSA. As an additional twist, the proposed hardware may be built from prototype components such as the hybrid memory cube (HMC; see figure 2), a three-dimensional stacked memory device designed by a consortium of industry leaders and researchers [2]. The core objectives of the MSE team include exploration of system architectures, analysis of emerging technologies, and analysis of optimization techniques.

Owners of HPC systems desire a computer that is infinitely fast, has infinite memory, takes up no space, and requires no energy. None of these attributes are truly realizable, and when considering a practical HPC system, trade-offs must be made. When analyzing a prospective HPC system, four primary metrics are customarily considered: financial cost, system resilience, time-to-solution, and energy efficiency. These metrics are interdependent. For example, increasing the speed of an HPC system will increase the amount of power it consumes and ultimately increase the cost necessary to operate it. In order to measure these metrics, one could build the system and test it; however, this would be extremely expensive and difficult to optimize. A model simulating the computer can be developed in far less time, and design parameters can be adjusted in software to achieve the desired balance of power, performance, reliability, and cost.

Any simulation or model of a computer should address the metrics listed above. If any are not addressed, the model could yield incomplete results, because optimizing for fewer than all relevant variables potentially leads to non-global extrema. Many scalar benchmarks, for example the TOP500 and the Graph500, focus exclusively on one characteristic, like time-to-solution, to the neglect of the other parameters of interest. The MSE team is collaborating to evangelize a more balanced approach to multiple facets of HPC system characterization, assuring an optimal solution to the Agency's needs.

FIGURE 2. NSA collaborated with the University of Maryland and Micron to develop a simulation tool for Micron's Hybrid Memory Cube that is helping to advance supercomputing applications. Micron is now sampling the three-dimensional package that combines logic and memory functions onto a single chip.

The use of benchmarking software allows for a more comprehensive evaluation of a proposed computer architecture's performance. This enables HPC system architects to better target their designs to serve NSA's needs. Simply stated, NSA has to work within budgetary and power (i.e., electricity) constraints, and it is vital to maximize the return on investment of money and time.

While this description is somewhat generic to all HPC system purchasers, NSA is willing to build special-purpose hardware devices and to employ specially developed programming languages if a cost-benefit analysis justifies it. Simulations of candidate systems take a holistic approach, targeting the network, CPU, memory hierarchy, and accelerators (e.g., a graphics processing unit or field-programmable gate array). Multiple levels of detail for a simulation are required to accomplish this. A simulation may be compute-cycle or functionally accurate; it may range from an abstract model to a hardware simulation that includes device physics.

Simulation technology

To accomplish the objective of enabling HPC system simulation within NSA, the MSE group carried out a survey of existing simulators from academia, industry, and national labs.
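The survey itself is beyond this excerpt, but the coarsest level of the simulation spectrum described above, an abstract analytical model, can be illustrated in a few lines. The sketch below is not one of the surveyed simulators or an MSE tool; the design-point numbers, the roofline-style time estimate, and the failure model are all assumptions chosen only to show how the four metrics interact.

    # Minimal sketch of an abstract analytical model (assumptions only, not a surveyed tool).
    # It ties together the metrics discussed above for one hypothetical design point:
    # time-to-solution, energy, cost, and a crude resilience estimate.
    from dataclasses import dataclass

    @dataclass
    class DesignPoint:
        peak_flops: float          # sustained floating-point rate (flops/s)
        mem_bandwidth: float       # aggregate memory bandwidth (bytes/s)
        power_watts: float         # average power draw (W)
        price_dollars: float       # acquisition cost
        node_count: int
        node_mtbf_hours: float     # mean time between failures per node

    def evaluate(d: DesignPoint, work_flops: float, traffic_bytes: float,
                 electricity_dollars_per_kwh: float = 0.10):
        # Roofline-style bound: the run is limited by compute or by memory traffic.
        time_s = max(work_flops / d.peak_flops, traffic_bytes / d.mem_bandwidth)
        energy_kwh = d.power_watts * time_s / 3600 / 1000
        operating_cost = energy_kwh * electricity_dollars_per_kwh
        # Crude resilience proxy: expected failures during the run, system-wide.
        system_mtbf_hours = d.node_mtbf_hours / d.node_count
        expected_failures = (time_s / 3600) / system_mtbf_hours
        return {"time_to_solution_s": time_s,
                "energy_kwh": energy_kwh,
                "operating_cost_usd": operating_cost,
                "total_cost_usd": d.price_dollars + operating_cost,
                "expected_failures": expected_failures}

    # Hypothetical design point and workload, chosen only to exercise the model.
    candidate = DesignPoint(peak_flops=1e15, mem_bandwidth=1e14, power_watts=2.0e6,
                            price_dollars=50e6, node_count=10_000, node_mtbf_hours=50_000)
    print(evaluate(candidate, work_flops=1e19, traffic_bytes=5e17))

Adjusting the design point and re-running evaluate() is the cheap analogue of the trade-off analysis described above: a higher peak_flops shortens a compute-bound run, but with a correspondingly higher power_watts it may leave the energy bill and operating cost nearly unchanged.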