US/Venezuela Workshop on High Performance Computing 2000 - WoHPC 2000
Seminario EEUU/Venezuela de Computación de Alto Rendimiento 2000
April 4-6, 2000, Puerto La Cruz, Venezuela
Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory
http://www.cs.utk.edu/~dongarra/

♦ Look at trends in HPC
  Ø Top500 statistics
♦ NetSolve
  Ø Example of grid middleware
♦ Performance on today's architectures
  Ø ATLAS effort
♦ Tools for performance evaluation
  Ø Performance API (PAPI)

- Started in 6/93 by Jack Dongarra, Hans W. Meuer, and Erich Strohmaier
- Basis for analyzing the HPC market
- Quantify observations
- Detection of trends (market, architecture, technology)

- Listing of the 500 most powerful computers in the world
- Yardstick: Rmax from LINPACK MPP — Ax=b, dense problem (TPP performance; rate as a function of problem size); a toy sketch of such a timed solve appears a few slides below
- Updated twice a year: at SC'xy in November and at the Mannheim meeting in June
- All data available from www.top500.org

♦ Original classes of machines
  Ø Sequential
  Ø SMPs
  Ø MPPs
  Ø SIMDs
♦ Two new classes
  Ø Beowulf-class systems
  Ø Clustering of SMPs and DSMs
    Ø Requires additional terminology: "Constellation"
♦ A way for tracking trends
  Ø in performance
  Ø in market
  Ø in classes of HPC systems
    Ø Architecture
    Ø Technology

Cluster of Clusters ("Constellation")
♦ An ensemble of N nodes, each comprising p computing elements
♦ The p elements are tightly bound: shared memory (e.g., SMP, DSM)
♦ The N nodes are loosely coupled, i.e., distributed memory
♦ p is greater than N
♦ The distinction is which layer gives us the most power through parallelism
Example: 4 TF Blue Pacific SST — 3 x 480 4-way SMP nodes, 3.9 TF peak performance, 2.6 TB memory, 2.5 Tb/s bisectional bandwidth, 62 TB disk, 6.4 GB/s delivered I/O bandwidth

[Chart: fastest computers by LINPACK Gflop/s, 1991-1999 — Cray Y-MP (8), Fujitsu VP-2600, NEC SX-3 (4), TMC CM-2 (2048), TMC CM-5 (1024), Fujitsu VPP-500 (140), Intel Paragon (6788), Hitachi CP-PACS (2040), Intel ASCI Red (9152), SGI ASCI Blue Mountain (5040), ASCI Blue Pacific SST (5808), Intel ASCI Red Xeon (9632)]

Rank | Manufacturer | Computer | Rmax [GF/s] | Installation Site | Country | Year | Area of Installation | # Proc
1 | Intel | ASCI Red | 2379.6 | Sandia National Labs, Albuquerque | USA | 1999 | Research | 9632
2 | IBM | ASCI Blue-Pacific SST, IBM SP 604E | 2144 | Lawrence Livermore National Laboratory | USA | 1999 | Research | 5808
3 | SGI | ASCI Blue Mountain | 1608 | Los Alamos National Lab | USA | 1998 | Research | 6144
4 | SGI | T3E 1200 | 891.5 | Government | USA | 1998 | Classified | 1084
5 | Hitachi | SR8000 | 873.6 | University of Tokyo | Japan | 1999 | Academic | 128
6 | SGI | T3E 900 | 815.1 | Government | USA | 1997 | Classified | 1324
7 | SGI | Origin 2000 | 690.9 | Los Alamos National Lab/ACL | USA | 1999 | Research | 2048
8 | Cray/SGI | T3E 900 | 675.7 | Naval Oceanographic Office, Bay Saint Louis | USA | 1999 | Research (Weather) | 1084
9 | SGI | T3E 1200 | 671.2 | Deutscher Wetterdienst | Germany | 1999 | Research (Weather) | 812
10 | IBM | SP Power3 | 558.13 | UCSD/San Diego Supercomputer Center, IBM/Poughkeepsie | USA | 1999 | Research | 1024

[Chart: Top500 performance development, Jun-93 to Nov-99 — Sum, N=1, and N=500 trend lines from 100 Mflop/s to 100 Tflop/s; by Nov-99 Sum = 50.97 TF/s, N=1 = 2.38 TF/s (Intel ASCI Red, Sandia), N=500 = 33.1 GF/s; growth faster than Moore's law; labeled systems include Fujitsu 'NWT' (NAL), Intel XP/S140 (Sandia), Hitachi/Tsukuba CP-PACS/2048, and commercial entries at Schwab, Merrill Lynch, and Nabisco]
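The Rmax yardstick cited above comes from timing a dense Ax=b solve. As a rough illustration only — not the tuned LINPACK/HPL code that produces reported Rmax numbers — the following self-contained C sketch times a naive LU solve with partial pivoting and converts the conventional 2/3·n^3 + 2·n^2 flop count into a Gflop/s rate. The problem size and random right-hand side are arbitrary choices for the example.

/* lu_rate.c -- toy illustration of the Top500 yardstick: time a dense
 * Ax=b solve and report (2/3*n^3 + 2*n^2) / time as a flop rate.
 * Unblocked LU with partial pivoting; NOT the tuned LINPACK/HPL code. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <time.h>

static void solve(int n, double *A, double *b)
{
    /* LU factorization with partial pivoting; b is overwritten with x. */
    for (int k = 0; k < n; k++) {
        int p = k;                       /* find pivot row */
        for (int i = k + 1; i < n; i++)
            if (fabs(A[i*n + k]) > fabs(A[p*n + k])) p = i;
        if (p != k) {                    /* swap rows k and p */
            for (int j = 0; j < n; j++) {
                double t = A[k*n + j]; A[k*n + j] = A[p*n + j]; A[p*n + j] = t;
            }
            double t = b[k]; b[k] = b[p]; b[p] = t;
        }
        for (int i = k + 1; i < n; i++) {  /* eliminate below the diagonal */
            double m = A[i*n + k] / A[k*n + k];
            A[i*n + k] = m;
            for (int j = k + 1; j < n; j++)
                A[i*n + j] -= m * A[k*n + j];
            b[i] -= m * b[k];
        }
    }
    for (int i = n - 1; i >= 0; i--) {     /* back substitution */
        for (int j = i + 1; j < n; j++)
            b[i] -= A[i*n + j] * b[j];
        b[i] /= A[i*n + i];
    }
}

int main(void)
{
    int n = 1000;                          /* problem size for the example */
    double *A = malloc((size_t)n * n * sizeof *A);
    double *b = malloc((size_t)n * sizeof *b);
    srand(1);
    for (int i = 0; i < n * n; i++) A[i] = (double)rand() / RAND_MAX;
    for (int i = 0; i < n; i++)     b[i] = (double)rand() / RAND_MAX;

    clock_t t0 = clock();
    solve(n, A, b);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    double flops = 2.0 / 3.0 * n * (double)n * n + 2.0 * (double)n * n;
    printf("n = %d, time = %.3f s, rate = %.3f Gflop/s\n",
           n, secs, flops / secs / 1e9);
    free(A); free(b);
    return 0;
}

The implementations behind the table above block the factorization for cache and distribute it across thousands of processors, but the flop-count convention and the rate calculation are the same.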
[Chart: extrapolated Top500 performance, Jun-93 to Nov-09 — Sum, N=1, N=10, and N=500 trend lines on a log scale (0.1 to 1,000,000 GFlop/s), with ASCI and the Earth Simulator marked and the 1 TFlop/s and 1 PFlop/s levels drawn; the annotation projects the entry-level (N=500) system at 1 TF around 2005 and 1 PF around 2010]

[Chart: architectures of Top500 systems, Jun-93 to Nov-99 — Single Processor, SIMD, SMP, MPP, Constellation, Cluster]

[Chart: manufacturers of Top500 systems, Jun-93 to Nov-99 — Cray, SGI, IBM, Convex/HP, Sun, Intel, TMC, "Japan Inc", others; annotation: Tera -> Cray Inc, $15M]

[Chart: processor families in Top500 systems, Jun-93 to Nov-99 — Inmos Transputer, Intel, SUN, MIPS, HP, IBM, Alpha, other]

[Chart: installation types of Top500 systems, Jun-93 to Nov-99 — Classified, Vendor, Academic, Industry, Research]

♦ Clustering of shared memory machines for scalability
  Ø Emergence of PC commodity systems
  Ø Pentium/Alpha based, Linux or NT driven
  Ø "Supercomputer performance at mail-order prices"
  Ø Beowulf-class systems (Linux + PC)
  Ø Distributed Shared Memory (clusters of processors connected)
  Ø Shared address space with deep memory hierarchy
♦ Efficiency of message passing and data parallel programming
  Ø Helped by standards efforts such as PVM, MPI, OpenMP, and HPF (a minimal MPI sketch follows the cluster table below)
♦ Many of the machines run as single-user environments
♦ Pure COTS

Clusters in the Top500 (selected entries):
Rank | Manufacturer | Computer | Rmax [GF/s] | Installation Site | Country | Year | Area of Installation | # Proc
33 | Sun | HPC 450 Cluster | 272.1 | Sun, Burlington | USA | 1999 | Vendor | 720
34 | Compaq | AlphaServer SC | 271.4 | Compaq Computer Corp., Littleton | USA | 1999 | Vendor | 512
44 | Self-made | Cplant Cluster | 232.6 | Sandia National Laboratories | USA | 1999 | Research | 580
169 | Self-made | Alphleet Cluster | 61.3 | Institute of Physical and Chemical Research (RIKEN) | Japan | 1999 | Research | 140
265 | Self-made | Avalon Cluster | 48.6 | Los Alamos National Lab/CNLS | USA | 1998 | Research | 140
351 | Siemens | hpcLine Cluster | 41.45 | Universitaet Paderborn/PC2 | Germany | 1999 | Academic | 192
454 | Self-made | Parnass2 Cluster | 34.23 | University of Bonn/Applied Mathematics | Germany | 1999 | Academic | 128
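The clustering slide above names MPI and PVM as the standards that made message passing on commodity clusters practical. As a reminder of what that programming model looks like — independent of any particular system in the table — here is a minimal, self-contained MPI sketch in C; the vector contents are synthetic stand-ins chosen only so the example runs.

/* dot.c -- minimal MPI example of the message-passing style used on
 * Beowulf-class clusters: each process owns a slice of two vectors,
 * computes a partial dot product, and the pieces are combined with
 * MPI_Reduce.  Build with mpicc; run with e.g. "mpirun -np 4 ./dot". */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process fills and reduces over its own slice of length n_local. */
    const int n_local = 1000;
    double local = 0.0;
    for (int i = 0; i < n_local; i++) {
        double x = rank * n_local + i;     /* global index as a stand-in value */
        double y = 1.0 / (x + 1.0);
        local += x * y;
    }

    /* Combine the partial sums on rank 0. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product over %d processes = %f\n", size, global);

    MPI_Finalize();
    return 0;
}

On a Beowulf-class system the same source runs unchanged on 2 or 200 processes; only the launch line changes, which is part of why the standards listed above helped these machines into the Top500.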
♦ Allow networked resources to be integrated into the desktop.
♦ Not just hardware; also make software resources available.
♦ Locate and "deliver" software or solutions to the user in a directly usable and "conventional" form.
♦ Part of the motivation: software maintenance.

♦ Three basic scenarios:
  Ø Client, servers, and agents anywhere on the Internet (3(10)-150(80-ws/mpp)-Mcell)
  Ø Client, servers, and agents on an intranet
  Ø Client, server, and agent on the same machine
♦ "Blue collar" grid-based computing
  Ø User can set things up; no "su" required
  Ø Doesn't require deep knowledge of network programming
♦ Focus on Matlab users
  Ø OO language, objects are matrices (pse, eg os)
  Ø One of the most popular desktop systems for numerical computing, 400K users

• No knowledge of networking involved
• Hides the complexity of numerical software
• Computation location transparency
• Provides access to virtual libraries:
  • Component grid-based framework
  • Central management of library resources
  • User not concerned with having the most up-to-date version
  • Automatic tie to the Netlib repository in the project
• Provides synchronous or asynchronous calls (user-level parallelism)

Matlab interface:
  >> define sparse matrix A
  >> define rhs
  >> [x, its] = netsolve('itmeth', 'petsc', A, rhs, 1.e-6)
Fortran interface:
  call NETSL('DGESV()', NSINFO, N, 1, A, MAX, IPIV, B, MAX, INFO)
Asynchronous calls are also available.

Computational server:
• Various software resources installed on various hardware resources
• Configurable and extensible:
  • Framework to easily add arbitrary software
  • Many numerical libraries being integrated by the NetSolve team
  • Much software being integrated by users

Agent:
• Gateway to the computational servers
• Performs load balancing among the resources
http://www.cs.utk.edu/netsolve

The NetSolve agent predicts the execution times and sorts the servers. The prediction for a server is based on:
• Its distance over the network (latency and bandwidth, with statistical averaging)
• Its performance (LINPACK benchmark)
• Its workload (a quick estimate from cached data — is the workload out of date?)
• The problem size and the algorithm complexity
http://www.cs.utk.edu/netsolve

University of Tennessee's grid prototype: Scalable Intracampus Research Grid (SInRG)

Request sequencing:
  netsl_begin_sequence( );
  netsl("command1", A, B, C);
  netsl("command2", A, C, D);
  netsl("command3", D, E, F);
  netsl_end_sequence(C, D);
[Diagram: sequence(A, B, E) — command1(A, B) produces result C on a server, command2(A, C) produces result D, command3(D, E) produces result F; the client ships inputs A, B, and E, while the intermediate outputs C and D stay with the servers]

[Diagram: a NetSolve client uses a common client-proxy interface to reach interchangeable back ends — a NetSolve proxy (NetSolve agent and NetSolve servers), a Globus proxy (GRAM, GASS, and MDS services), a Ninf proxy (Ninf services), and a Legion proxy (Legion services)]

♦ Kerberos is used to maintain access control lists and manage access to computational resources.
♦ NetSolve properly handles authorized and non-authorized components together in the same system.
♦ In use by the DoD Modernization program at the Army Research Lab.

♦ Multiple requests to a single problem.
♦ Previous solution: many calls to netslnb( );  /* non-blocking */
♦ New solution: a single call to netsl_farm( );

SCIRun torso defibrillator application (a bit in the future):
♦ x = netsolve('linear system', A, b)
  Ø NetSolve examines the user's data (matrix type: sparse or dense; solver: direct or iterative) together with the resources available (in the grid sense) and makes decisions on the best time to solution dynamically; a toy sketch of this kind of decision follows.
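The closing idea — the system inspecting the user's matrix and the available resources before picking a solution strategy — can be made concrete with a toy heuristic. The sketch below is not NetSolve code; the function name, the density threshold, the assumed iteration count, and the cost models (roughly 2/3·n^3 flops for a dense direct solve versus about 2·nnz flops per iteration for a sparse iterative one) are illustrative assumptions only.

/* choose_solver.c -- toy illustration of the adaptive-solver idea on the
 * last slide.  NOT NetSolve code: a heuristic that compares rough cost
 * models for a dense direct solve and a sparse iterative solve. */
#include <stdio.h>

typedef enum { DENSE_DIRECT, SPARSE_ITERATIVE } solver_t;

/* n: matrix order, nnz: number of nonzeros, iters: assumed iteration count */
static solver_t choose_solver(long n, long nnz, long iters)
{
    double direct_cost    = (2.0 / 3.0) * (double)n * n * n; /* ~LU flops   */
    double iterative_cost = 2.0 * (double)nnz * (double)iters; /* ~matvecs  */

    /* Very dense matrices gain nothing from a sparse format. */
    double density = (double)nnz / ((double)n * n);
    if (density > 0.25) return DENSE_DIRECT;

    return (iterative_cost < direct_cost) ? SPARSE_ITERATIVE : DENSE_DIRECT;
}

int main(void)
{
    /* Example: a 10,000 x 10,000 matrix with ~5 nonzeros per row,
       assuming the iterative method needs about 500 iterations. */
    long n = 10000, nnz = 50000, iters = 500;
    solver_t s = choose_solver(n, nnz, iters);
    printf("%s\n", s == DENSE_DIRECT ? "dense direct" : "sparse iterative");
    return 0;
}

An agent making this choice for real would also fold in the network, server-performance, and workload terms listed on the agent slide, so that the decision targets total time to solution rather than flop count alone.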