SPEC CPU2000: Measuring CPU Performance in the New Millennium

COMPUTING PRACTICES

John L. Henning, Compaq Computer Corp.

The SPEC consortium's mission is to develop technically credible and objective benchmarks so that both computer designers and purchasers can make decisions on the basis of realistic workloads.

Computers perennially become more powerful, as do the software applications that run on them, and it seems almost human nature to want the biggest and fastest toy we can afford. But how do you know if it is? Even if your application never does any I/O, it's not just the speed of the CPU that dictates performance—cache, main memory, and compilers also play a role—and different software applications have differing performance requirements. And whom do you trust to provide this information?

The Standard Performance Evaluation Corporation (SPEC) is a nonprofit consortium whose members include hardware vendors, software vendors, universities, customers, and consultants. SPEC's mission is to develop technically credible and objective component- and system-level benchmarks for multiple operating systems and environments, including high-performance numeric computing, Web servers, and graphical subsystems. Members agree on benchmark suites that are derived from real-world applications so that both computer designers and computer purchasers can make decisions on the basis of realistic workloads. By license agreement, members agree to run and report results as specified by each benchmark suite.

On June 30, 2000, SPEC retired the CPU95 benchmark suite. Its replacement is CPU2000, a new CPU benchmark suite with 19 applications that have never before been in a SPEC CPU suite. By continually evolving these benchmarks, SPEC aims to keep pace with the breakneck speed of technological innovation. But how does SPEC develop a benchmark suite, and what do these benchmarks do? We can get a sense of the process by looking over SPEC's shoulders on one specific day in the benchmark development process.

SPEC BENCHATHON

It is 6 a.m. on a cool Thursday morning in February 1999. A Compaq employee prepares to shut off the alarm at SPEC headquarters in Manassas, Virginia, and start the day. But he finds that the alarm is already off, because two IBM employees are still inside from the night before. A weeklong SPEC ritual is in progress: A subcommittee is in town for a "benchathon," and technical activity is happening at all hours. The Compaq employee goes to the back room, which is about 85 degrees Fahrenheit (30 degrees Celsius), thanks to its collection of workstations by Sun, HP, Siemens, Intel, SGI, Compaq, and IBM. He opens the window to the cool air, and opens windows on his workstations to review the results of running Kit 60 of what will eventually be known as SPEC CPU2000. (SPEC will release it 10 months later, after building Kit 98.)

The primary goal at this stage is portability for the candidate benchmarks. As other subcommittee members arrive, the Compaq employee updates the Porter's Progress spreadsheet in preparation for today's meeting. As of this Thursday of the benchathon week, the spreadsheet shows test results for

• 34 candidate benchmarks,
• 18 platforms from seven hardware vendors (11 with 32-bit and seven with 64-bit address spaces), and
• 11 versions of Unix (three of them Linux) and two versions of Windows NT.

SPEC tests a wide variety of systems, although there are basically only two operating system families represented at this benchathon: Unix and NT. SPEC relies on the efforts of those who choose to participate and cannot mandate participation for any platform. Unfortunately, only 19 of 34 candidate benchmarks are successful on all platforms. Portability is difficult because SPEC CPU suites are not abstracted loop kernels but are programs for real-world problems, with real-world portability challenges. Portability challenges can be roughly categorized by source code language.

Fortran. The Fortran-77 applications are the easiest to port because the language contains relatively few machine-dependent features, and the ANSI standard is more than 20 years old. Nevertheless, there are issues. For example, a particular application has 47,134 lines of code, 123 source files, and hard-to-debug wrong answers when optimization is enabled for one SPEC member's compiler. Later, the compiler will be blamed, the application will be exonerated, and the 200.sixtrack benchmark will ship with CPU2000.

Several F77 applications allocate 200 megabytes of memory. When the allocation is static, a member complains that executables take too much disk space; when it is dynamic, another member's OS stack limits are exceeded. Eventually, SPEC will choose dynamic allocation but will allow static allocation for those who need it.

The Fortran-90 applications are more difficult to port because F90 implementations are less common and less mature than F77 implementations. One application author is a "language lawyer": He aggressively uses as many features as he can from the Fortran-90 standard. In February 1999, only three platforms succeed on his application. Later, it will work on all tested F90 compilers, and the author of 187.facerec will be proud that he uncovered bugs in many of these compilers. But facerec itself will also be adjusted (see the "Comparable Work" sidebar).
C and C++. The C applications are harder to port than the ones in either Fortran dialect. The portability issues are not uncommon: How big is a long? How big is a pointer? Does this platform implement calloc? Is it little endian or big endian? But the applications take differing approaches to these issues, and SPEC has its own requirements. For example, SPEC prefers the ANSI standard, but some programs still have widespread K&R vestiges. Eventually, the vestiges that actually cause problems will be prioritized for removal, and most of the rest will remain unchanged. Some programs use a tailoring process, often called configure, but SPEC prefers to minimize source code differences. SPEC avoids configure and looks for other portability methods, such as additional #ifdef directives.
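The questions above lend themselves to a small illustration. The following C sketch is not taken from any SPEC benchmark; the SPEC_CPU_NO_CALLOC macro and the portable_calloc wrapper are invented names, shown only to suggest how a porter might answer these questions with a compile-time guard rather than a configure script.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical guard: a porter might define a flag like this for one
       platform rather than running a configure step. The macro name is
       illustrative, not part of any SPEC kit. */
    #ifdef SPEC_CPU_NO_CALLOC
    /* Fall back to malloc plus memset where calloc is missing or unreliable. */
    static void *portable_calloc(size_t n, size_t size)
    {
        void *p = malloc(n * size);
        if (p != NULL)
            memset(p, 0, n * size);
        return p;
    }
    #else
    #define portable_calloc calloc
    #endif

    int main(void)
    {
        /* How big is a long? How big is a pointer? The answers differ
           between the 32-bit and 64-bit platforms at the benchathon. */
        printf("sizeof(long)  = %u\n", (unsigned)sizeof(long));
        printf("sizeof(void*) = %u\n", (unsigned)sizeof(void *));

        /* Little endian or big endian? Inspect the first byte of a known value. */
        unsigned int probe = 1;
        unsigned char first_byte = *(unsigned char *)&probe;
        printf("byte order    = %s endian\n", first_byte ? "little" : "big");

        /* Zero-initialized allocation through the guarded wrapper. */
        double *work = portable_calloc(1024, sizeof(double));
        if (work == NULL)
            return 1;
        free(work);
        return 0;
    }

Defining one such flag for the single platform that needs it keeps the source code differences small, which is the preference the article describes.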
The C++ applications are the biggest challenge. The standard is new, runtime functions (class libraries) are diverse and hard to compare, and SPEC received only two C++ applications. One of these has already been voted out: It worked with gnu g++ but proved impractical to port to ANSI C++. The other—252.eon—is [...] allowing six differences out of 3.9 million numbers printed.

Table 1. February 1999 benchathon results.

                       19 Feb    26 Feb
    Compile errors       22         2
    Runtime errors       18         6
    Validation errors    60        41
    Total               100        49

BENCHMARK SELECTION

Porting is a clearly technical activity, with a reasonably simple completeness criterion: Does the benchmark work? By contrast, benchmark selection involves multiple, sometimes conflicting criteria, which may lack simple answers. The CPU2000 benchmarks (Table 2) were selected from a much larger collection of candidates, submitted by members and by the general public through a search process announced at SPEC's Web site. Ultimately, benchmarks are selected by vote, and members may weigh the criteria differently.

Sidebar: Comparable Work

In order to provide a level playing field for competition, SPEC wants to ensure that comparable work is done across all tested platforms. But consider the following code fragment, which is extracted (without ellipses) from a much larger section of 187.facerec:

    If ((NewSim - OldSim) > SimThresh) Then
      CoordX (IX, IY) = NewX
      CoordY (IX, IY) = NewY
      Hops = Hops + 1
      Improved = .TRUE.
    EndIf
    Sweeps = Sweeps + 1
    If ((.NOT. Improved) .OR. (Sweeps >= Params%Match%MaxSweeps)) Exit

The loop exit depends on a floating-point comparison indicating an improvement in face similarity. The comparison is affected by differences in the order and accuracy of floating-point operations as implemented by different compilers and platforms. If two systems correctly recognize a face but do a different number of iterations, are they doing the same work? Although one could argue that in some sense the work is equivalent—one platform just takes a different code path to get the answer—SPEC has traditionally preferred similar code paths. But SPEC would also prefer to avoid artificially recoding the author's original algorithm—for example, by changing the above loop to use a fixed number of iterations.

Thus, 187.facerec faced a dilemma in March 1999. The solution to this dilemma was to use a new feature of the SPEC CPU2000 tool suite—namely, file-by-file validation tolerances. SPEC modified facerec to output detailed data about the number of iterations for individual faces to one file and a summary of total iterations to another file. The two files are validated as follows:

               Detail    Summary
    reltol      0.2       0.001
    abstol      5         2.e-7
    skiptol     4         0

That is, for individual faces, the tools will accept the reported number of iterations if it matches the expected number of iterations within 20 percent (reltol = 0.2), or they will accept the reported number of iterations if it is no more than five iterations different from the expected number of iterations (abstol = 5). If both of these checks fail, then up to four times the tools will accept any difference whatsoever (skiptol = 4). But for the overall run, the number of iterations must match within one-tenth of 1 percent (reltol = .001), and all iterations must be checked (skiptol = 0). Therefore, two platforms executing 187.facerec may in fact do different amounts of work in the task of matching a single face, but they must do substantially similar amounts of work in the task of matching all the faces.
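The reltol, abstol, and skiptol settings describe a simple acceptance rule. The C sketch below implements that rule as the sidebar states it; it is only an illustration, not the SPEC CPU2000 tool suite, and the validate function, the file roles, and the iteration counts are invented for the example.

    #include <math.h>
    #include <stdio.h>

    /* Accept observed[i] if it is within abstol (absolute) or reltol
       (relative) of expected[i]; otherwise count a miss. The comparison
       passes as long as the number of misses does not exceed skiptol.
       A sketch of the described semantics, not the SPEC tools. */
    static int validate(const double *expected, const double *observed, int n,
                        double reltol, double abstol, int skiptol)
    {
        int misses = 0;
        for (int i = 0; i < n; i++) {
            double diff = fabs(observed[i] - expected[i]);
            double base = fabs(expected[i]) > 0.0 ? fabs(expected[i]) : 1.0;
            if (diff <= abstol || diff / base <= reltol)
                continue;                 /* within tolerance */
            if (++misses > skiptol)
                return 0;                 /* too many outright mismatches */
        }
        return 1;                         /* pass */
    }

    int main(void)
    {
        /* Hypothetical per-face iteration counts: reference run versus
           a different platform. */
        double expected[] = { 12, 9, 15, 22, 7, 30 };
        double observed[] = { 13, 9, 18, 22, 6, 29 };
        int n = (int)(sizeof expected / sizeof expected[0]);

        /* Detail file: loose per-face tolerances (reltol 0.2, abstol 5, skiptol 4). */
        printf("detail  : %s\n",
               validate(expected, observed, n, 0.2, 5.0, 4) ? "ok" : "miscompare");

        /* Summary file: totals must match tightly (reltol 0.001, abstol 2e-7, skiptol 0). */
        double exp_total = 0.0, obs_total = 0.0;
        for (int i = 0; i < n; i++) { exp_total += expected[i]; obs_total += observed[i]; }
        printf("summary : %s\n",
               validate(&exp_total, &obs_total, 1, 0.001, 2e-7, 0) ? "ok" : "miscompare");
        return 0;
    }

With these made-up counts, every individual face passes the loose detail tolerances, but the total drifts by about 2 percent, which the one-tenth-of-1-percent summary tolerance rejects, so the run as a whole would be flagged.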
