Acceleration Technology for High Performance Computing in China


Real Performance. Real Science. Real Tools.
John Gustafson, Ph.D., CTO, High Performance Computing, ClearSpeed Technology
Copyright © 2007 ClearSpeed Technology plc. All rights reserved. www.clearspeed.com

Thesis
• China faces the challenge of enormous energy demand for its continued growth.
• Computing in the US now consumes over 10% of the national power grid (and growing); China will soon follow this pattern.
• For HPC applications, ClearSpeed has sophisticated technologies for reducing power use per operation by tenfold.
• ClearSpeed is partnering with one of the top three Chinese computer companies to create a new high-performance computer.

ClearSpeed company background
• Fabless semiconductor company based in Bristol and San Jose
  – CSX600 coprocessors manufactured by IBM
  – Accelerator boards assembled and tested by Flextronics
• Core products
  – World's highest-performance, lowest-power processors for double-precision floating point (IEEE 754 compliant)
  – Accelerators for PCI expansion slots in servers and workstations
  – Work alongside 32-bit or 64-bit x86 industry-standard processors to accelerate compute-intensive functions
• Market focus
  – Acceleration of High Performance Computing (HPC) applications
  – Universities and national laboratories, life sciences, and financial services
  – Embedded applications in consumer and military products
• Competitive position
  – Only supplier of custom-designed, HPC-focused acceleration products
  – Uniquely positioned to exploit the growing need for HPC acceleration
  – Substantial intellectual property base with over 100 patents granted or pending

Constraints for processor development (John Shalf and David Bailey, Lawrence Berkeley National Laboratory, 2007)
• New constraints
  – Power limits clock rates
  – No more performance can be squeezed from instruction-level parallelism (complex cores with ILP)
• But Moore's Law continues
  – What to do with all of those transistors if everything else is flat-lining?
  – The number of cores per chip now doubles every 18 months instead of the clock frequency
• Power consumption is the chief concern for system architects.
• Power efficiency is the primary concern of consumers of computer systems.
(Figure courtesy of Kunle Olukotun, Lance Hammond, Herb Sutter, and Burton Smith)

Folding@home with 700,000 PlayStation 3s?
• Each PS3 averages 220 watts on this application.
• Total power use: 266 megawatts!
• Power cost: about $600,000 per day.
• That is roughly 2,000 barrels of oil per day for one petaflop/s.
• For that much electric power, our accelerators can deliver 500 petaflop/s.
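As a sanity check on that last claim, here is a minimal Python sketch; it is my own illustration, using only the 266 MW figure above and the >2 GFLOPS-per-watt efficiency quoted for the Advance boards later in the deck:

    # What the slide's electric-power figure buys at a given energy efficiency.
    # Inputs are figures quoted on the slides; the calculation is illustrative.
    FLEET_POWER_MW = 266          # total draw of ~700,000 PS3s quoted above
    ACCEL_GFLOPS_PER_WATT = 2.0   # ClearSpeed Advance efficiency (>2 GFLOPS/W)

    watts = FLEET_POWER_MW * 1e6
    gflops = watts * ACCEL_GFLOPS_PER_WATT
    print(f"{gflops / 1e6:.0f} PFLOPS")   # ~532 PFLOPS, i.e. the slide's "500 petaflop/s"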
ClearSpeed product overview
• CSX600 processor
  – World's highest-performance, most energy-efficient processor for double-precision floating-point applications
  – 96 processing cores
  – 40 DP GFLOPS peak, >33 GFLOPS DGEMM
  – 10 watts (typical)
• Advance™ PCI-X and PCIe accelerators
  – Exploit standard expansion slots in servers, workstations, and blade expansion units
  – >66 GFLOPS DGEMM per accelerator
  – 25–33 watts (typical)
• Software
  – Linux and Microsoft® drivers
  – ClearSpeed CSXL plug-and-play acceleration: accelerates compute-intensive calls from the Intel MKL and AMD ACML standard libraries
  – Software Development Kit: familiar x86 development environment, C compiler with parallel extensions, and complete visual profiling and debugging tools

The ClearSpeed Advance accelerator family (Advance X620 and Advance e620)
• The only accelerator family specifically designed for HPC
  – 80.64 GFLOPS peak, 66 GFLOPS sustained double precision
  – Industry-leading energy efficiency at >2 GFLOPS per watt
  – Advance e620: PCIe x8, standard height 98 mm (3.9 in), half length 167 mm (6.5 in)
  – Advance X620: PCI-X, standard height 98 mm (3.9 in), two-thirds length 203 mm (8.0 in)
  – Plug-and-play acceleration with standard math libraries, including Level 3 BLAS and LAPACK
  – Fully programmable in the Cn extended parallel language

Heat leads to bulk
• Air cooling hits its limits at about 70 watts per liter:
  – A PCI card's standard budget of 25 watts in 0.3 liters ✔
  – A 1U server might use 1,000 watts in a volume of 14 liters ✔
  – A 42U standard rack might use 40 kilowatts in 3,000 liters ✔
• Exceed 70 watts per liter, and temperatures rise above operational limits.
• Latest e620 ClearSpeed accelerator: 4 inches by 6 inches, 0.5 liters in the system, 35 watts, 9 ounces.

Dissipation volume can exceed actual volume
• To find the real volume occupied by a component, in liters, divide its wattage by 70.
• What may seem like a dense, powerful solution can actually dilute the GFLOPS per liter because of the heat it generates.

Performance and power efficiency within a 250-watt budget (peak multiply-add GFLOPS)

  Device                                    Average wattage   32-bit GFLOPS   64-bit GFLOPS
  Intel Clovertown (3.6 GHz)                250               86              57
  Nvidia Tesla                              170 (1)           345 (1)         not supported
  Future Nvidia 64-bit                      unknown           unknown         1/8th of 32-bit performance (1)
  10 FPGA PCI cards (Virtex LX160 based)    250               430             4.2
  Cell BE                                   210               230             15
  Future Cell HPC                           220               200             104
  7 ClearSpeed e620 Advance™ boards         231               564             564

  Notes:
  1) The table uses information given by vendors at the International Supercomputing Conference, Dresden, June 2007.
  2) 25 to 50 watts is the current expansion-slot power budget; 250 watts has been proposed.

New design approach delivers 1 TFLOP in 1U
• 1U standard server, Intel Xeon 5365 (3.0 GHz)
  – Two sockets, quad-core
  – 96 DP GFLOPS peak
  – Approximately 650 watts
  – Approximately 3.5 TFLOPS peak in a 25 kW rack
• 1U ClearSpeed Accelerated TeraScale Server (CATS)
  – 24 CSX600 96-core processors
  – ~1 DP TFLOPS peak
  – Approximately 500 watts
  – Approximately 19 TFLOPS peak in a 25 kW rack (18 standard servers plus 18 acceleration servers)
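To make the divide-by-70 rule concrete, the following minimal Python sketch (my own illustration, reusing board and server figures quoted above) computes GFLOPS per effective liter once cooling, rather than packaging, sets the volume:

    # Effective ("dissipation") volume: the volume a component really needs
    # once air cooling is limited to roughly 70 watts per liter.
    AIR_COOLING_LIMIT_W_PER_L = 70.0

    def effective_volume_liters(watts, physical_liters):
        """Return the larger of the physical volume and the cooling-limited volume."""
        return max(physical_liters, watts / AIR_COOLING_LIMIT_W_PER_L)

    def gflops_per_liter(gflops, watts, physical_liters):
        """Density metric using the effective volume rather than the physical one."""
        return gflops / effective_volume_liters(watts, physical_liters)

    # Example inputs taken from the slides (peak DP GFLOPS, typical watts, liters).
    e620_board = gflops_per_liter(gflops=80.64, watts=35, physical_liters=0.5)
    one_u_server = gflops_per_liter(gflops=96, watts=650, physical_liters=14)

    print(f"Advance e620 board: {e620_board:.1f} GFLOPS per effective liter")
    print(f"2-socket 1U server: {one_u_server:.1f} GFLOPS per effective liter")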
Top 500 supercomputer in a single cabinet
• Base system: 40 servers with 2.66 GHz x86 quad-core processors
  – 2.8 TFLOPS LINPACK
  – 24 kW
  – 10 sq. ft.
  – 800 pounds
  – ~$400,000
• With 80 ClearSpeed Advance cards added
  – 7 TFLOPS LINPACK
  – 26 kW
  – 10 sq. ft.
  – 850 pounds
  – <$1,000,000
• ClearSpeed increases power draw by 8%, floor space by 0%, and weight by 6%, while increasing speed by 150%.

Double the Earth Simulator speed with only 1 MW
• November 2007: Tokyo Tech added more ClearSpeed accelerators to TSUBAME. The 648 ClearSpeed Advance cards raise cluster performance from 38 TFLOPS to 56.4 TFLOPS.
  – A performance increase of 48% for just a 2% increase in power consumption and a 10% increase in cost
  – Hybrid approach: 10,368 AMD Opteron cores with just 648 ClearSpeed cards
  – Far smaller volume than the Earth Simulator
  – ClearSpeed accelerates AMBER, which accounts for about 70% of submitted jobs
(Photo: Professor Matsuoka standing beside TSUBAME at Tokyo Tech)

Acceleration for finance and science applications
• Finance
  – Up to 20x speedup per accelerator for Monte Carlo based analytic option pricing
• Universities and national laboratories
  – 3x to 9x speedup for AMBER molecular modelling
  – Test data from a major pharmaceutical company
• Scalable performance
  – Low energy consumption supports multiple accelerators per system
  – Maximizes performance density and energy efficiency

Math functions exploit 160 GB/s bandwidth
(Chart: 64-bit function operations per second, in billions, for Sqrt, InvSqrt, Exp, Ln, Cos, Sin, SinCos, and Inv on a 2.6 GHz dual-core Opteron, a 3 GHz dual-core Woodcrest, and a ClearSpeed Advance card.)
• Typical speedup of ~8x over the fastest x86 processors, because the math functions stay in the local memory on the card.

NAB and AMBER 10 acceleration
• Newton-Raphson refinement is now possible, using analytically computed second derivatives.
• A 2.6x speedup was obtained for this operation in three hours of effort, with no source code changes.
• Enables accurate computation of entropy and Gibbs free energy for the first time.
• Available now in the NAB (Nucleic Acid Builder) code; slated for addition to AMBER 10.

Quantum chemistry acceleration results
• DGEMM content is 18% to 65% in quantum chemistry codes such as Gaussian, GAMESS, NWChem, and Molpro.
• Initial work with Molpro shows a 9x speedup on CATS versus a 3 GHz Intel Woodcrest server.
• More modern approaches (Qbox, Car-Parrinello) are 50% DGEMM with all dimensions large; the host does the non-DGEMM work, for a net doubling of speed (the arithmetic behind this is sketched after the Summary).

Summary
• ClearSpeed's very high ratio of flops per watt makes much more compact HPC systems possible, which in turn eases the communication issues of large clusters.
• In China, as in other countries, the future of HPC belongs to the technologies with the highest 64-bit performance per watt and per unit of size.
• Real value is now being seen for 64-bit applications in chemistry, financial modelling, and life sciences; mechanical engineering applications may be next.
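The net-doubling claim for 50%-DGEMM codes is Amdahl's law applied to the offloaded fraction; here is a minimal Python sketch of that arithmetic (my own illustration, not part of the original deck):

    # Amdahl's-law view of accelerating the DGEMM fraction of a quantum chemistry run.
    def overall_speedup(dgemm_fraction, dgemm_speedup):
        """Whole-application speedup when only the DGEMM portion is accelerated."""
        remaining = 1.0 - dgemm_fraction
        return 1.0 / (remaining + dgemm_fraction / dgemm_speedup)

    # If half the runtime is DGEMM and the accelerator makes it effectively free
    # (e.g. fully overlapped with the host's non-DGEMM work), the net gain is ~2x.
    print(overall_speedup(0.50, 1000))   # -> ~1.998
    # At the lower end of the quoted range (18% DGEMM), the ceiling is much lower.
    print(overall_speedup(0.18, 1000))   # -> ~1.22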