The TSUBAME Grid: Redefining Supercomputing

One of the world's leading technical institutes, the Tokyo Institute of Technology (Tokyo Tech) created the fastest supercomputer in Asia and one of the largest outside the United States. Using Sun x64 servers and data servers deployed in a grid architecture, Tokyo Tech built a cost-effective, flexible supercomputer that meets the demands of compute- and data-intensive applications. With hundreds of systems incorporating thousands of processors and terabytes of memory, the TSUBAME grid delivers 47.38 TeraFLOPS of sustained performance and 1 petabyte (PB) of storage to users running common off-the-shelf applications.

Highlights

• The Tokyo Tech Supercomputer and UBiquitously Accessible Mass storage Environment (TSUBAME) redefines supercomputing
• 648 Sun Fire™ X4600 servers deliver 85 TeraFLOPS of peak raw compute capacity
• 42 Sun Fire X4500 data servers provide access to 1 petabyte of networked storage
• ClearSpeed Advance accelerator boards configured in 360 compute nodes help the grid exceed 47 TeraFLOPS of sustained Linpack performance
• Eight Voltaire Grid Director ISR9288 high-speed InfiniBand switches keep traffic in the grid moving
• Sun N1™ Grid Engine software distributes jobs across systems in the grid
• An innovative and integrated software stack enables common off-the-shelf applications, including PC applications, to run on the grid

Supercomputing demands

Tokyo Tech set out to build the largest and most flexible supercomputer in Japan. With numerous groups providing input into the size and functionality of the system, the new supercomputing campus grid infrastructure had several key requirements. Groups focused on large-scale, high-performance distributed parallel computing required a mix of 32- and 64-bit systems that could run the Linux operating system and deliver over 1,200 SPECint2000 (peak) and 1,200 SPECfp2000 (peak) per CPU, combining for over 20,000 SPECfp_rate2000 (peak) and 36 TeraFLOPS of sustained Linpack performance across the system. Each server in the grid had to incorporate at least eight CPUs and 16 GB of shared access memory, with over half the servers capable of 32 GB, and total grid memory of 5 TB or more.

Not content with sheer size, Tokyo Tech also wanted to bring supercomputing to everyday use. Unlike traditional, monolithic systems based on proprietary solutions that serve the needs of the few, the new supercomputing architecture had to run commercial off-the-shelf and open source applications, including structural analysis applications like ABAQUS and MSC/NASTRAN, computational chemistry tools like Amber and Gaussian, and statistical analysis packages like SAS, Matlab, and Mathematica.

With a wide range of researchers throughout the university accessing the system, as well as collaborators all over the world, data storage was a key concern. Over a petabyte of physical storage capacity was required, with no data loss expected across the entire system over 1,000 years. A parallel file system with a total RAID I/O transfer rate of 5 GB/second was needed to support over 1,000 NFS mount points alongside fast parallel file systems like Lustre.
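The delivered configuration described in the sections that follow comfortably exceeds these targets. The short Python sketch below is an illustrative back-of-envelope check only, using figures quoted later in this document (42 data servers with 48 x 500 GB drives and roughly 1 GB/second of disk-to-network throughput each); the aggregate I/O number is a naive upper bound, not a measured parallel file system rate.

    # Quick check of the delivered TSUBAME configuration against the stated
    # requirements, using only figures quoted in this document.

    required_memory_tb = 5          # total grid memory requirement
    required_storage_pb = 1         # physical storage capacity requirement
    required_parallel_io_gbs = 5    # parallel file system I/O requirement

    delivered_memory_tb = 21                        # across 648 Sun Fire X4600 servers
    delivered_storage_tb = 42 * 48 * 500 / 1000     # 42 X4500 servers, 48 x 500 GB drives each
    delivered_disk_to_net_gbs = 42 * 1.0            # ~1 GB/s disk-to-network per X4500 (upper bound)

    print(f"Memory : {delivered_memory_tb} TB delivered vs {required_memory_tb} TB required")
    print(f"Storage: {delivered_storage_tb:.0f} TB delivered vs {required_storage_pb} PB required")
    print(f"I/O    : {delivered_disk_to_net_gbs:.0f} GB/s aggregate vs {required_parallel_io_gbs} GB/s required")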
The TSUBAME supercomputing grid

The ninth largest supercomputer in the world today as measured by TOP500, the TSUBAME grid is powered by 648 Sun Fire™ X4600 servers with 11,088 AMD Opteron™ processor cores and 21 terabytes of memory. With all systems interconnected via InfiniBand technology and capable of accessing 1 petabyte of hard disk storage in parallel, the TSUBAME grid delivers 47.38 TeraFLOPS of sustained performance. Integrated by NEC and incorporating technology from ClearSpeed Technology, Inc., ClusterFS, and Voltaire, as well as the Sun N1™ System Manager and Sun N1 Grid Engine software, the TSUBAME grid can run both the Solaris™ Operating System (OS) and Linux to deliver applications to users and speed scientific algorithms and data processing.

TSUBAME grid system architecture

Designed by Sun, the TSUBAME grid consists of 648 Sun Fire X4600 servers running SuSE Linux Enterprise Server 9 SP3, configured into capacity, capability, and shared memory clusters. Together, these systems provide users access to 11,088 high-performance, dual-core, Next-Generation AMD Opteron processors and 21 TB of memory. Each Sun Fire X4600 server incorporates two PCI-Express 4x single data rate (SDR) InfiniBand host adapters for connection to the network.

648 servers. 21 TB memory. 1 PB data storage. 47.38 TeraFLOPS. All in 35 days.

High-performance x64 compute servers

Sun Fire X4600 servers are fast and energy efficient, and are the only four-way x64 servers that scale to 16-way in a compact 4RU form factor. This powerful rackmount server scales quickly from four to eight sockets simply by adding modular processor boards. The design enables Sun Fire X4600 systems to be upgraded and scaled to next-generation processors and memory without disrupting the existing software and network environment. Sun Fire X4600 servers support up to 64 GB of DDR-400 memory with ECC.

High-speed InfiniBand interconnect

All Sun Fire X4600 compute servers and Sun Fire X4500 data servers are connected to an InfiniBand network through eight Voltaire Grid Director ISR9288 high-speed InfiniBand switches. Each switch provides 20 Gbps of bidirectional bandwidth for 288 InfiniBand ports in a single 14U chassis, enabling 1,352 server and storage links. Up to 11.52 Tbps of full bisectional switch bandwidth is possible in a fat-tree architecture, with less than 420 nanoseconds of latency between any two ports. As a result, Voltaire ISR9288 switches can be interconnected to form large clusters consisting of thousands of nodes.

The InfiniBand connectivity schema is designed to provide the TSUBAME grid with optimum network balancing, maximum availability, and high performance. All Sun Fire X4500 and Sun Fire X4600 servers are connected to one of six edge InfiniBand switches. These six switches are in turn connected to two Voltaire ISR9288 core switches. With 24 links between each edge and core switch, the system has a blocking factor of 5:1 and a maximum of nine node hops. Multiple paths are available through the core switches, fostering high availability. In addition, each InfiniBand host adapter installed in a Sun Fire X4600 compute server is attached to a different line board, so each link is connected to one of the 24 chips on each line board, providing optimum distribution across the edge switches.
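The 5:1 blocking factor can be reproduced from the figures above. The Python sketch below is illustrative arithmetic only; spreading the roughly 1,338 host links (two adapters per Sun Fire X4600 plus one per Sun Fire X4500) evenly across the six edge switches is an assumption for the sake of the estimate, not a statement of the actual cabling plan.

    # Rough derivation of the 5:1 blocking factor from the link counts in this section.
    # The even split of host links across edge switches is an assumption.

    compute_links = 648 * 2      # two SDR InfiniBand host adapters per Sun Fire X4600
    storage_links = 42 * 1       # one host adapter per Sun Fire X4500
    edge_switches = 6
    uplinks_per_core = 24        # links from each edge switch to each core switch
    core_switches = 2

    host_links_per_edge = (compute_links + storage_links) / edge_switches
    uplinks_per_edge = uplinks_per_core * core_switches

    blocking = host_links_per_edge / uplinks_per_edge
    print(f"{host_links_per_edge:.0f} host links over {uplinks_per_edge} uplinks per edge switch "
          f"-> roughly {blocking:.1f}:1, i.e. about 5:1 blocking")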
In the TSUBAME grid, 360 compute servers are also configured with a ClearSpeed Advance accelerator board for added floating-point performance. Each accelerator board combines two ClearSpeed CSX600 processors in a PCI-X form factor and delivers 96 GFLOPS of theoretical peak performance and 50 GFLOPS of sustained double-precision matrix multiply (the DGEMM routine of BLAS) performance, while averaging 25 watts of power consumption.

Figure 1. The TSUBAME grid system architecture: 648 Sun Fire X4600 compute nodes with ClearSpeed CSX600 accelerators, a 1,440 Gbps InfiniBand network built from eight Voltaire ISR9288 switches, 42 Sun Fire X4500 storage servers and an NEC iStorage S1800AT array, plus external network devices and external grid connectivity.

Ultra high-density storage

By integrating high-performance AMD Opteron processors with massive data storage, Sun Fire X4500 servers provide high storage density and fast throughput rates at nearly half the cost of traditional solutions. These systems deliver four-way x64 server performance and up to 24 TB of direct attached storage in a 4U form factor, with 1 GB/second of throughput from disks to network and 2 GB/second of throughput to memory. Sun Fire X4500 servers support up to 16 GB of DDR-400 memory with ECC.

Forty-two high-performance Sun Fire X4500 servers running Red Hat Enterprise Linux 4 provide storage for the TSUBAME grid. These high-density data servers each incorporate 48 direct attached, hot-swappable 500 GB SATA drives, for a total storage capacity of 1 PB. Each Sun Fire X4500 server also includes one PCI-X 4x SDR InfiniBand host adapter.

TSUBAME grid software

A wide variety of software packages run on the compute and data servers and work together to make the TSUBAME grid widely accessible to users.

Compute server software stack

All Sun Fire X4600 servers in the TSUBAME grid run the SuSE Linux Enterprise Server 9 SP3 environment, as well as the following:

• Sun N1 Grid Engine 6.0 software provides distributed resource management for user jobs running on the grid. The Sun N1 Grid Engine software runs on a Sun Fire X4100 management server within the grid.
• Lustre client software provides access to the shared parallel file system.
• PGI 6.1 and GNU (gcc) compilers are installed on all compute nodes in the cluster.
• A variety of Message Passing Interface (MPI) tools, such as MPICH, OpenMPI, and HP-MPI, are installed for application portability. Some of these tools use the IP over InfiniBand (IPoIB) protocol rather than native InfiniBand protocols.
• A Voltaire ibhost tool enables applications to employ MPI communication over the InfiniBand network. Based on MVAPICH, the Voltaire implementation includes several enhancements for the TSUBAME grid, including support for two accelerator cards in a single system, a shared receive queue, and adaptive FASTPATH.

Making the grid accessible

What makes TSUBAME unique is its ability to make vast computing and storage resources available to a wide range of users running off-the-shelf applications with ease. The Sun N1 Grid Engine software makes this possible by managing how jobs are allocated to systems in the grid, without users needing to know the underlying details of where jobs run.

By using the Sun N1 Grid Engine software, the physical systems that comprise the TSUBAME grid can be viewed logically (Figure 2). Users log in to the grid via login nodes that are load balanced using a round-robin policy. Sessions are then transferred to an interactive node by the Sun N1 Grid Engine software.
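To illustrate this workflow, the sketch below shows what a user-side submission to a Grid Engine scheduler such as Sun N1 Grid Engine 6.0 could look like. It is not taken from the TSUBAME environment: the job name, queue name, parallel environment name, and solver command are hypothetical placeholders, and only standard qsub options and the standard NSLOTS variable are used.

    """Illustrative job submission to a Grid Engine scheduler (e.g. Sun N1 Grid
    Engine 6.0). Queue, parallel environment, and solver names are hypothetical
    placeholders, not TSUBAME-specific values."""
    import subprocess
    import tempfile

    JOB_SCRIPT = """#!/bin/sh
    #$ -N solver_demo          # job name shown by qstat
    #$ -cwd                    # run from the submission directory
    #$ -pe mpi 16              # request 16 slots in a hypothetical 'mpi' parallel environment
    #$ -q batch.q              # hypothetical batch queue name

    # The scheduler chooses the compute nodes; the user never needs to know
    # which physical servers in the grid actually run the job.
    mpirun -np $NSLOTS ./my_solver input.dat
    """

    def submit(script_text: str) -> str:
        """Write the job script to a temporary file and submit it with qsub."""
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(script_text)
            path = f.name
        result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
        return result.stdout.strip()    # scheduler's acknowledgement, including the job ID

    if __name__ == "__main__":
        print(submit(JOB_SCRIPT))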