
2007 SPEC Benchmark Workshop
January 21, 2007, Radisson Hotel Austin North

The HPC Challenge Benchmark: A Candidate for Replacing LINPACK in the TOP500?
Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory

Outline
♦ A look at LINPACK
♦ Brief discussion of the DARPA HPCS Program
♦ The HPC Challenge Benchmark
♦ An answer to the question

What Is LINPACK?
♦ Most people think LINPACK is a benchmark.
♦ LINPACK is a package of mathematical software for solving problems in linear algebra, mainly dense systems of linear equations.
♦ The project had its origins in 1974.
♦ LINPACK: "LINear algebra PACKage"
  ¾ Written in Fortran 66

Computing in 1974
♦ High-performance computers: IBM 370/195, CDC 7600, Univac 1110, DEC PDP-10, Honeywell 6030
♦ Fortran 66
♦ Goal: run efficiently
♦ BLAS (Level 1)
  ¾ Vector operations
♦ Trying to achieve software portability
♦ The LINPACK package was released in 1979
  ¾ About the time of the Cray 1

The Accidental Benchmarker
♦ Appendix B of the Linpack Users' Guide
  ¾ Designed to help users extrapolate execution time for the Linpack software package
♦ First benchmark report from 1977
  ¾ Cray 1 to DEC PDP-10
♦ Coverage: dense matrices, linear systems, least squares problems, singular values

LINPACK Benchmark?
♦ The LINPACK Benchmark is a measure of a computer's floating-point rate of execution for solving Ax = b.
  ¾ It is determined by running a computer program that solves a dense system of linear equations.
♦ Information is collected and made available in the LINPACK Benchmark Report.
♦ Over the years the characteristics of the benchmark have changed a bit.
  ¾ In fact, there are three benchmarks included in the Linpack Benchmark report.
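The core measurement described above can be sketched in a few lines. This is a minimal Python/NumPy stand-in, not the official Fortran benchmark; the `linpack_like` helper and its parameters are illustrative. It times a dense LU-based solve and converts the nominal operation count into MFlop/s.

```python
import time
import numpy as np

def linpack_like(n=1000, seed=0):
    """Time the solution of a random dense system Ax = b and report MFlop/s.

    Illustrative only: the real Linpack benchmark is a Fortran (or HPL/C)
    code; here NumPy's LAPACK-backed solver stands in for it.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)   # LU factorization with partial pivoting
    elapsed = time.perf_counter() - t0

    # Nominal operation count: 2/3 n^3 for the factorization + 2 n^2 for the solve
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    mflops = flops / elapsed / 1e6
    return x, mflops

x, rate = linpack_like(500)
print(f"{rate:.1f} MFlop/s")
```

Note that the reported rate depends on the BLAS/LAPACK library NumPy is linked against, which is exactly the kind of software-dependence the benchmark was designed to expose.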
♦ The LINPACK Benchmark dates from 1977.
  ¾ Dense linear system solved with LU factorization using partial pivoting
  ¾ Operation count: 2/3 n³ + O(n²)
  ¾ Benchmark measure: MFlop/s
  ¾ The original benchmark measures the execution rate for a Fortran program on a matrix of size 100x100.

For Linpack with n = 100
♦ Not allowed to touch the code.
♦ Only set the optimization in the compiler and run.
♦ Provides a historical look at computing.
♦ Table 1 of the report (52 pages of the 95-page report)
  ¾ http://www.netlib.org/benchmark/performance.pdf

Linpack Benchmark Over Time
♦ In the beginning there was only the Linpack 100 Benchmark (1977)
  ¾ n = 100 (80 KB); a size that would fit in all the machines
  ¾ Fortran; 64-bit floating-point arithmetic
  ¾ No hand optimization (only compiler options); source code available
♦ Linpack 1000 (1986)
  ¾ n = 1000 (8 MB); wanted to see higher performance levels
  ¾ Any language; 64-bit floating-point arithmetic
  ¾ Hand optimization OK
♦ Linpack Table 3 (Highly Parallel Computing, 1991) (Top500, 1993)
  ¾ Any size (n as large as you can; n = 10^6 is 8 TB and ~6 hours)
  ¾ Any language; 64-bit floating-point arithmetic
  ¾ Hand optimization OK
  ¾ Strassen's method not allowed (confuses the operation count and rate)
  ¾ Reference implementation available
♦ In all cases results are verified by checking that the scaled residual ||Ax − b|| / (||A|| ||x|| n ε) = O(1).
♦ Operation count: 2/3 n³ + O(n²) for the factorization; 2 n² for the solve.

Motivation for Additional Benchmarks
From the Linpack Benchmark and the Top500: "no single number can reflect overall performance"
♦ Good
  ¾ One number
  ¾ Simple to define & easy to rank
  ¾ Allows problem size to change with machine and over time
  ¾ Stresses the system with a run of a few hours
♦ Bad
  ¾ Emphasizes only "peak" CPU speed and number of CPUs
  ¾ Does not stress local bandwidth
  ¾ Does not stress the network
  ¾ Does not test gather/scatter
  ¾ Ignores Amdahl's Law (only does weak scaling)
♦ Ugly
  ¾ MachoFlops
  ¾ Benchmarketeering hype
♦ Clearly we need something more than Linpack.
♦ HPC Challenge Benchmark
  ¾ The test suite stresses not only the processors, but the memory system and the interconnect.
  ¾ The real utility of the HPCC benchmarks is that architectures can be described with a wider range of metrics than just Flop/s from Linpack.

At the Time the Linpack Benchmark Was Created…
♦ If we think about computing in the late 70's, perhaps the LINPACK benchmark was a reasonable thing to use.
♦ The memory wall was not so much a wall as a step.
♦ In the 70's, things were more in balance.
  ¾ The memory kept pace with the CPU: n cycles to execute an instruction, n cycles to bring in a word from memory.
♦ It showed the effect of compiler optimization.
♦ Today it provides a historical base of data.

Many Changes
♦ Many changes in our hardware over the past 30 years
  ¾ Superscalar, vector, distributed memory, shared memory, cluster, multicore, …
♦ While there have been some changes to the Linpack Benchmark, not all of them reflect the advances made in the hardware.
♦ Today's memory hierarchy is much more complicated.
[Chart: Top500 systems by architecture, 1993-2006 — single processor, SIMD, SMP, MPP, constellations, clusters; counts out of 500 systems]
High Productivity Computing Systems (HPCS)
Goal: Provide a generation of economically viable high-productivity computing systems for the national security and industrial user community (2010; started in 2002).
♦ Focus on real (not peak) performance of critical national security applications:
  ¾ Intelligence/surveillance
  ¾ Reconnaissance
  ¾ Cryptanalysis
  ¾ Weapons analysis
  ¾ Airborne contaminant modeling
  ¾ Biotechnology
♦ Programmability: reduce the cost and time of developing applications
♦ Software portability and system robustness
[Diagram: HPCS program focus areas — performance characterization & prediction, programming models, hardware technology, system architecture, and software technology, framed by industry R&D and analysis & assessment]
Fill the critical technology and capability gap between today (late-80's HPC technology) and the future (quantum/bio computing).

HPCS Roadmap
¾ 5 vendors in Phase 1; 3 vendors in Phase 2; 1+ vendors in Phase 3
¾ MIT Lincoln Laboratory leading the measurement and evaluation team
¾ Phase 1: concept study, $20M (2002); Phase 2: advanced design & prototypes, $170M (2003-2005); Phase 3: full-scale development of petascale systems, ~$250M each (2006-2010)
¾ Evaluation framework evolves with the phases: new evaluation framework, test evaluation framework, then a validated procurement evaluation methodology

Predicted Performance Levels for Top500
[Chart: measured and predicted Linpack performance (TFlop/s, log scale) of the Top500 total, #1, #10, and #500 entries, Jun-03 through Jun-09]

A PetaFlop Computer by the End of the Decade
♦ At least 10 companies are developing a petaflop system in the next decade:
  ¾ Cray, IBM, Sun; targets: 2+ Pflop/s Linpack, 6.5 PB/s data-streaming bandwidth, 3.2 PB/s bisection bandwidth, 64,000 GUPS
  ¾ Dawning, Galactic, Lenovo (Chinese companies)
  ¾ Hitachi, NEC, Fujitsu (Japanese; "Life Simulator" (10 Pflop/s), Keisoku project, $1B over 7 years)
  ¾ Bull

PetaFlop Computers in 2 Years!
♦ Oak Ridge National Lab
  ¾ Planned for 4th quarter 2008 (1 Pflop/s peak)
  ¾ From Cray's XT family; interconnect based on Cray XT technology
  ¾ 23,936 chips; each chip is a quad-core processor (95,744 processors)
  ¾ Each processor does 4 flops/cycle at a 2.8 GHz cycle time
  ¾ Hypercube connectivity
  ¾ 6 MW, 136 cabinets
♦ Los Alamos National Lab
  ¾ Roadrunner (2.4 Pflop/s peak)
  ¾ Uses IBM Cell and AMD processors
  ¾ 75,000 cores

HPC Challenge Goals
♦ To examine the performance of HPC architectures using kernels with more challenging memory access patterns than the Linpack Benchmark
  ¾ The Linpack benchmark works well on all architectures, even cache-based, distributed-memory multiprocessors, due to:
    1. Extensive memory reuse
    2. Scalability with respect to the amount of computation
    3. Scalability with respect to the communication volume
    4. Extensive optimization of the software
♦ To complement the Top500 list
♦ Stress CPU, memory system, and interconnect
♦ Allow for optimizations
  ¾ Record the effort needed for tuning
  ¾ The base run requires only MPI and BLAS
♦ Provide verification & archiving of results

Tests on Single Processor and System
● Local: only a single processor (core) is performing computations.
● Embarrassingly parallel: each processor (core) in the entire system is performing computations, but they do not communicate with each other explicitly.
● Global: all processors in the system are performing computations and they explicitly communicate with each other.

HPC Challenge Benchmark
Consists of basically 7 benchmarks; think of it as a framework or harness for adding benchmarks of interest.
1. LINPACK (HPL) ― MPI Global (Ax = b)
2. STREAM ― Local; single CPU
   *STREAM ― Embarrassingly parallel
3. PTRANS (A = A + B^T) ― MPI Global
4. RandomAccess ― Local; single CPU
   *RandomAccess ― Embarrassingly parallel
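Two of the kernels above can be sketched in NumPy to show what they measure. These are illustrative single-process versions only; the official HPCC codes are C/MPI with strict run rules, and the function names here are my own.

```python
import time
import numpy as np

def stream_triad(n=10_000_000, alpha=3.0):
    """STREAM 'triad' kernel a = b + alpha*c; reports sustained GB/s.

    Counts 24 bytes of memory traffic per element (read b, read c, write a),
    following the STREAM accounting convention.
    """
    b = np.full(n, 2.0)
    c = np.full(n, 1.0)
    t0 = time.perf_counter()
    a = b + alpha * c
    dt = time.perf_counter() - t0
    return a, 24.0 * n / dt / 1e9

def random_access(log2_size=20, updates=1_000_000, seed=0):
    """RandomAccess-style kernel: XOR updates at random table locations;
    reports GUP/s (giga-updates per second). Simplified from the HPCC rules,
    which prescribe a specific pseudo-random update stream."""
    table = np.zeros(2**log2_size, dtype=np.uint64)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, table.size, size=updates)
    val = rng.integers(0, 2**63, size=updates, dtype=np.uint64)
    t0 = time.perf_counter()
    np.bitwise_xor.at(table, idx, val)   # scattered updates; duplicates accumulate
    dt = time.perf_counter() - t0
    return table, updates / dt / 1e9

_, gbs = stream_triad()
_, gups = random_access()
print(f"triad: {gbs:.2f} GB/s, random access: {gups:.4f} GUP/s")
```

The contrast is the point of the suite: the triad streams memory with unit stride and rewards bandwidth, while RandomAccess defeats caches and prefetchers entirely, so the two rates can differ by orders of magnitude on the same machine.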