Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and IMB Benchmarks*

Subhash Saini1, Rolf Rabenseifner3, Brian T. N. Gunney2, Thomas E. Spelce2, Alice Koniges2, Don Dossa2, Panagiotis Adamidis3, Robert Ciotti1, Sunil R. Tiyyagura3, Matthias Müller4, and Rod Fatoohi5

1 Advanced Supercomputing Division, NASA Ames Research Center, Moffett Field, California 94035, USA
  {ssaini, ciotti}@nas.nasa.gov

2 Lawrence Livermore National Laboratory, Livermore, California 94550, USA
  {gunney, koniges, dossa1}@llnl.gov

3 High-Performance Computing-Center (HLRS), University of Stuttgart, Allmandring 30, D-70550 Stuttgart, Germany
  {adamidis, rabenseifner, sunil}@hlrs.de

4 ZIH, TU Dresden, Zellescher Weg 12, D-01069 Dresden, Germany
  [email protected]

5 San Jose State University, One Washington Square, San Jose, California 95192
  [email protected]

Abstract

The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem, and interconnect fabric of six leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8, and IBM Blue Gene/L.

...system performance and global network issues [1,2]. In this paper, we use the HPCC suite as a first comparison of systems. Additionally, the message-passing paradigm has become the de facto standard in programming high-end parallel computers. As a result, the performance of a majority of applications depends on the performance of the MPI functions as implemented on these systems. Simple bandwidth and latency tests are two traditional metrics for assessing the performance of the interconnect fabric of the ...
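For reference, a bandwidth/latency test of this kind can be expressed as a simple MPI ping-pong between two ranks. The sketch below is only a minimal illustration of the measurement principle, not the IMB or HPCC code; the message size and repetition count are arbitrary example values.

/* Minimal MPI ping-pong sketch: ranks 0 and 1 bounce a fixed-size message;
 * latency is half the average round-trip time, bandwidth is bytes per
 * one-way transfer time. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1 MiB payload (example value) */
#define REPS      100         /* round trips to average over   */

int main(int argc, char **argv)
{
    int rank, size;
    char *buf;
    double t0, t1, rtt;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "need at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    buf = malloc(MSG_BYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        rtt = (t1 - t0) / REPS;   /* average round-trip time */
        printf("one-way latency  : %g us\n", 0.5 * rtt * 1e6);
        printf("one-way bandwidth: %g MB/s\n",
               (MSG_BYTES / (0.5 * rtt)) / 1e6);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Production benchmarks such as IMB additionally sweep over many message sizes and repeat the measurement to reduce timing noise; the sketch reports only a single size for clarity.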