NCCS Hardware

Presented by Jim Rogers, Director of Operations, National Center for Computational Sciences

NCCS resources (October 2007)

Seven systems are connected by a control network (1 GigE and 10 GigE routers) and the UltraScience Network: the Cray XT4 (Jaguar), the Cray X1E (Phoenix), Track II, a Blue Gene/P, the IBM Linux NSTG cluster, a visualization cluster, and IBM HPSS. Together the supercomputers provide 51,110 CPUs, 68 TB of memory, and 336 TFlops, with many storage devices supported; total shared disk is 250.5 TB.

• Jaguar (Cray XT4): 23,662 processors at 2.6 GHz, 45 TB memory, 900 TB disk
• Phoenix (Cray X1E): 1,024 processors at 0.5 GHz, 2 TB memory, 44 TB disk
• Track II: 4,512 quad-core processors at 2.3 GHz, 18,048 GB memory
• Blue Gene/P: 2,048 quad-core processors at 850 MHz, 4,096 GB memory
• NSTG (IBM Linux cluster): 56 processors at 3 GHz, 76 GB memory, 4.5 TB disk
• Visualization cluster: 128 processors at 2.2 GHz, 128 GB memory, 9 TB disk
• Evaluation platforms: 144-processor Cray XD1 with FPGAs, SRC Mapstation, Clearspeed, BlueGene (at ANL)
• Test systems: 96-processor Cray XT3
• Scientific visualization lab: 35-megapixel Powerwall
• Backup storage: 5 PB; data storage: 5 PB

Hardware roadmap

As it looks to the future, the National Center for Computational Sciences expects to lead the accelerating field of high-performance computing. Upgrades will boost Jaguar's performance to 250 teraflops by the end of 2007, followed by the installation of two separate petascale systems in 2009.

Jaguar system specifications

• 54 TF: 5,212 dual-core 2.6 GHz Opteron compute processors; 82 single-core 2.4 GHz Opteron SIO processors; 4 GB memory per socket (20 TB total system); SeaStar 1 interconnect at 1.8 GB/s per socket; 120 TB disk at 14 GB/s
• 100 TF: 11,508 dual-core 2.6 GHz Opteron compute processors; 116 dual-core 2.6 GHz Opteron SIO processors; 4 GB memory per socket (45 TB total system); SeaStar 2 interconnect at 4.0 GB/s per socket; 900 TB disk at 55 GB/s total
• 250 TF: 7,816 quad-core Opteron compute processors; 124 dual-core Opteron SIO processors; 8 GB memory per socket (63 TB total system); SeaStar 2 interconnect at 4.0 GB/s per socket; 900 TB disk at 55 GB/s
• 1,000 TF: 24,000 multi-core Opteron compute processors; 544 quad-core Opteron SIO processors; 32 GB memory per socket (768 TB total system); Gemini interconnect; 5-15 PB disk at 240 GB/s

Phoenix – Cray X1E

Cray X1E: 1,024 vector processors, 18.5 TF
• Ultra-high bandwidth
• Globally addressable memory
• Addresses large-scale problems that cannot be solved on any other computer

Jaguar – Cray XT4

Today:
• 120 TF Cray XT4
• 2.6 GHz dual-core AMD Opteron processors
• 11,508 compute nodes
• 900 TB disk
• 124 cabinets
• Currently partitioned as 96 cabinets running Catamount and 32 cabinets running Compute Node Linux
• November 2007: all cabinets running Compute Node Linux

Jaguar – Cray XT4 architecture

The Cray XT4 is a scalable architecture:
• Designed to scale to tens of thousands of processors
• Measured MPI bandwidth of 1.8 GB/s
• 3-D torus topology

Phoenix utilization by discipline

[Chart: Phoenix utilization (%) by discipline, October through August. Disciplines: Astro, Atomic, Biology, Chemistry, Climate, Fusion, Industry, Materials, Other.]

Jaguar utilization by discipline

[Chart: Jaguar utilization (%) by discipline, October through August. Disciplines: Astro, Biology, Chemistry, Climate, Combustion, Fusion, Industry, Materials, Nuclear, Other.]

Contact

Jim Rogers
Director of Operations
National Center for Computational Sciences
(865) 576-2978
[email protected]
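
A rough cross-check on the Jaguar specification figures above: each phase's peak rating follows from socket count, cores per socket, clock rate, and floating-point operations per cycle. The sketch below is an illustration only; the socket counts and clocks come from the list above, while the two-flops-per-core-per-cycle figure is an assumption about Opterons of that generation, not something stated in the slides.

```c
#include <stdio.h>

/* Back-of-the-envelope peak estimate for the Jaguar phases listed above:
 * peak (TF) = sockets * cores_per_socket * clock (GHz) * flops_per_cycle / 1000
 * The flops-per-cycle value (2 for these dual-core Opterons) is an assumption. */
static double peak_tf(int sockets, int cores_per_socket, double clock_ghz,
                      int flops_per_cycle)
{
    return sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000.0;
}

int main(void)
{
    /* 54 TF phase: 5,212 dual-core 2.6 GHz Opterons */
    printf("54 TF phase:  ~%.1f TF peak\n", peak_tf(5212, 2, 2.6, 2));
    /* 100 TF phase: 11,508 dual-core 2.6 GHz Opterons */
    printf("100 TF phase: ~%.1f TF peak\n", peak_tf(11508, 2, 2.6, 2));
    return 0;
}
```

Run as-is, this reproduces roughly 54 TF and 120 TF for the first two phases, consistent with the ratings quoted above.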
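
The 1.8 GB/s measured MPI bandwidth quoted for the XT4 interconnect is the kind of number typically obtained from a two-rank ping-pong test. The sketch below illustrates that style of measurement; it is not the benchmark NCCS ran, and the 8 MB message size and iteration count are arbitrary choices.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal ping-pong bandwidth sketch between MPI ranks 0 and 1.
 * Message size and iteration count are illustration values only. */
int main(int argc, char **argv)
{
    const int iters = 100;
    const int bytes = 8 * 1024 * 1024;   /* 8 MB per message */
    char *buf = malloc(bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0) {
        /* Each iteration moves 2 * bytes over the link (message and reply). */
        double gbytes = 2.0 * iters * bytes / 1e9;
        printf("Sustained bandwidth: %.2f GB/s\n", gbytes / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

Launched across two nodes, a test like this reports a sustained point-to-point rate; the result depends on message size, process placement, and network contention.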
