Understanding PARSEC Performance on Contemporary CMPs

Major Bhadauria and Vincent M. Weaver, Cornell University ({major vince}@csl.cornell.edu)
Sally A. McKee, Chalmers University of Technology ([email protected])

Abstract

PARSEC is a reference application suite used in industry and academia to assess new Chip Multiprocessor (CMP) designs. No investigation to date has profiled PARSEC on real hardware to better understand scaling properties and bottlenecks. This understanding is crucial in guiding future CMP designs for these kinds of emerging workloads. We use hardware performance counters, taking a systems-level approach and varying common architectural parameters: number of out-of-order cores, memory hierarchy configurations, number of multiple simultaneous threads, number of memory channels, and processor frequencies. We find these programs to be largely compute-bound, and thus limited by number of cores, micro-architectural resources, and cache-to-cache transfers, rather than by off-chip memory or system bus bandwidth. Half the suite fails to scale linearly with increasing number of threads, and some applications saturate performance at few threads on all platforms tested. Exploiting thread level parallelism delivers greater payoffs than exploiting instruction level parallelism. To reduce power and improve performance, we recommend increasing the number of arithmetic units per core, increasing support for TLP, and reducing support for ILP.

1 Introduction

The PARSEC (Princeton Application Repository for Shared-Memory Computers) benchmark suite [4] attempts to represent both current and emerging workloads for multiprocessing hardware. The suite is intended to help drive CMP design research in both industry and academia. For these applications to help guide future designs, users must first understand which hardware resources are critical for desired system behavior, particularly as applications need to scale with increasing number of processing cores. Studying application behavior on existing systems representative of contemporary design trends is a good starting point for developing the foundations of this understanding. We therefore evaluate PARSEC on several contemporary CMP platforms, each representative of different design choices that can greatly affect workloads. PARSEC applications exhibit different memory behaviors, different data sharing patterns, and different workload partitions from most other benchmark suites in common use (see Section 2), which is precisely why we choose to study them on current hardware in order to learn how to better design future hardware for user and commercial workloads, as opposed to just the commonly studied HPC workloads.

We investigate several aspects of contemporary CMPs: memory speeds, SMT performance, out-of-order execution, and performance scaling with increasing number of threads. We specifically examine current micro-architectural resource bottlenecks. In doing so, we identify:

  • bottlenecks in current systems for particular PARSEC applications;
  • machine aspects that have little effect on PARSEC application performance (and thus are not good candidates for future architectural enhancement for these types of workloads); and
  • resources crucial to good scaling properties for PARSEC applications.

The PARSEC applications exhibit differing memory behaviors: some are bandwidth limited, performing clustered accesses to large amounts of data (with little locality); others are latency limited, with some enjoying good locality (which the memory subsystem can exploit), and others exhibiting scattered accesses with little locality (impeding any memory backend from masking latencies).

Making good use of memory resources is necessary, but more important is these applications' ability to exploit parallelism. In particular, we find that exploiting thread-level parallelism (TLP) over instruction-level parallelism (ILP) greatly benefits these workloads. Larger caches have little effect on performance, and thus we conclude that design time and transistors are likely better spent on increasing logic to improve TLP, reducing speculative logic intended to exploit ILP, and prefetching into local caches (since cache accesses tend to have high locality, and misses do not cause invalidations in other caches).

2 Related Work

Most architectural studies, including workload studies, use simulation to gather data about the applications in question. If done well, this can provide information that cannot be obtained by running applications on native hardware. On the other hand, running codes on real platforms provides some information unobtainable (or unreliable) from simulation. Whichever method is used, care must be taken to ensure accuracy of results (e.g., OS noise must not distort hardware measurements, and simulation results, especially from partial simulations, are subject to both inherent simulator error as well as methodological errors). Here we focus on the applications and kernels used to study existing and proposed hardware performance. A discussion of using simulation versus hardware performance monitoring counters is beyond the scope of this paper [18].

Bienia et al. [4] study PARSEC in detail using functional simulation, and (separately) compare PARSEC to the SPLASH-2 benchmarks [3, 19], finding that PARSEC applications exhibit more intense sharing among threads, exploit a pipelined programming model for several programs, and use larger memory footprints that cause higher cache miss rates than other benchmark suites. We gather additional results, using real hardware to cover metrics that cannot be obtained via purely functional simulation.

The NAS Parallel Benchmarks [2] (NPBs) represent early efforts to evaluate performance of highly parallel platforms. The NPBs address shortcomings of previous benchmarks, including the Livermore Fortran Kernels [13] (the "Livermore Loops"), LINPACK (Fortran subroutines to solve a variety of linear equations and least-squares problems), and LINPACK's successor LAPACK (the "Linear Algebra Pack", which explicitly exploits shared memory). These suites largely target vector supercomputers, and are insufficient (especially with respect to problem size) for studying scalability of emerging supercomputers. Furthermore, full application benchmarks are difficult to port and lack automatic parallelization tools (or they did at the time of benchmark introduction). The NPBs thus emerged as the benchmarks of choice for evaluating large scale parallel machines. They also enjoy widespread use in the evaluation of proposed machine designs (including CMPs), but specifically target scientific computations (e.g., large-scale computational fluid dynamics applications). Like SPLASH-2 [19], they exhibit different performance behaviors from PARSEC, which explicitly tries to evaluate machines not designed for scientific supercomputing.

MiBench [9], MediaBench [11] (with an advertised upgrade …

Alam et al. [1] characterize performance of scientific workloads on multicore chips. Iyer et al. [10] explore resource bottlenecks for commercial Java servers, examining tradeoffs of cores, caches, and memory.

All non-PARSEC studies discussed herein investigate special-purpose workloads; we investigate a broader range of workloads (always on actual hardware, and never in simulation) to determine which architectural features are most crucial for general-purpose CMPs.

3 Experimental Setup

We evaluate the PARSEC codes according to a set of metrics spanning CMP performance, and, in particular, affecting scaling properties as applications grow to greater numbers of threads. We consider three properties with respect to cache performance: number of evictions compared to total accesses, bus invalidation requests (but not snoop traffic), and data bus traffic from writebacks and core-to-core transfers. Since memory tends to bottleneck applications, we examine two metrics with respect to DRAMs and thread performance: reducing bandwidth and reducing memory speeds. With respect to throughput, we examine chip traffic ratios (main memory accesses per retired instruction) to quantify how dependent an application is on memory. Most importantly, we measure performance scaling as number of threads increases for each application. We examine bottlenecks, and identify whether they are software or hardware artifacts. We investigate performance of scaling number of threads via Simultaneous Multithreading (SMT) versus increasing individual cores. With respect to how micro-architecture resources affect performance, we examine processor stalls due to lack of resources, including Re-Order Buffer (ROB) hazards, unavailable Reservation Station (RS) slots, unavailable Load/Store Queue (LSQ) slots, and contention for Floating Point (FP) units.

PARSEC consists of 12 applications from Princeton, Intel, and Stanford. V1.0 of the suite represents a diverse set of commercial and emerging workloads. Table 1 lists the applications; for more information, please see Bienia et al. [4]. We use publicly available tools and hardware to ensure reproducible results. We use gcc 4.2 C and C++ compilers with Linux kernel 2.6.22.18. The only exception is our Solaris machine, which uses the Sun C compiler. We compile in 64-bit mode using statically linked libraries. All experiments use full native input sets.

We test a range of system configurations that vary solely in memory hierarchy, system frequency, and micro-architecture. Table 2 lists machines and relevant architec…
