Multiprogramming Performance of the Pentium 4 with Hyper-Threading


In the Third Annual Workshop on Duplicating, Deconstructing and Debunking (WDDD 2004), held at ISCA '04, pp. 53-62.

James R. Bulpin* and Ian A. Pratt
University of Cambridge Computer Laboratory, J J Thomson Avenue, Cambridge, UK, CB3 0FD.
Tel: +44 1223 331859. [email protected]

* James Bulpin is funded by a CASE award from Marconi Corporation plc. and EPSRC.

Abstract

Simultaneous multithreading (SMT) is a very fine grained form of hardware multithreading that allows simultaneous execution of more than one thread without the notion of an internal context switch. The fine grained sharing of processor resources means that threads can impact each others' performance.

Tuck and Tullsen first published measurements of the performance of the SMT Pentium 4 processor with Hyper-Threading [12]. Of particular interest is their evaluation of the multiprogrammed performance of the processor by concurrently running pairs of single-threaded benchmarks. In this paper we present experiments and results obtained independently that confirm their observations. We extend the measurements to consider the mutual fairness of simultaneously executing threads (an area hinted at but not covered in detail by Tuck and Tullsen) and compare the multiprogramming performance of pairs of benchmarks running on the Hyper-Threaded SMT system and on a comparable SMP system.

We show that there can be considerable bias in the performance of simultaneously executing pairs and investigate the reasons for this. We show that the performance gap between SMP and Hyper-Threaded SMT for multiprogrammed workloads is often lower than might be expected, an interesting result given the obvious economic and energy consumption advantages of the latter.

1 Introduction

Intel Corporation's "Hyper-Threading" technology [6], introduced into the Pentium 4 [3] line of processors, is the first commercial implementation of simultaneous multithreading (SMT). SMT is a form of hardware multithreading building on dynamic issue superscalar processor cores [15, 14, 1, 5]. The main advantage of SMT is its ability to better utilise processor resources and to hide memory hierarchy latency by being able to provide more independent work to keep the processor busy. Other architectures for simultaneous multithreading and hardware multithreading in general are described elsewhere [16].

Hyper-Threading currently supports two heavyweight threads (processes) per processor, presenting the abstraction of two independent logical processors. The physical processor contains a mixture of duplicated (per-thread) resources such as the instruction queue; shared resources tagged by thread number such as the DTLB and trace cache; and dynamically shared resources such as the execution units. The resource partitioning is summarised in table 1. The scheduling of instructions to execution units is process independent, although there are limits on how many instructions each process can have queued, to try to maintain fairness.

                       Duplicated              Shared                  Tagged/Partitioned
    Fetch              ITLB,                   Microcode ROM           Trace cache
                       Streaming buffers
    Branch prediction  Return stack buffer,    Global history array
                       Branch history buffer
    Decode             State                   Logic                   uOp queue (partitioned)
    Execute                                    Register rename,
                                               Instruction schedulers
    Retirement                                                         Reorder buffer (up to
                                                                       50% use per thread)
    Memory                                     Caches                  DTLB

    Table 1: Resource division on Hyper-Threaded P4 processors.

Whilst the logical processors are functionally independent, contention for resources will affect the progress of the processes. Compute-bound processes will suffer contention for execution units, while processes making more use of memory will contend for use of the cache, with the possible result of increased capacity and conflict misses. With cooperating processes the sharing of the cache may be useful, but for two arbitrary processes the contention may have a negative effect.

In general, multi-threaded processors are best exploited by running cooperating threads such as a true multi-threaded program. In reality, however, many workloads will be single threaded. With Intel now incorporating Hyper-Threading into the Pentium 4 line of processors aimed at desktop PCs, it is likely that many common workloads will be single-threaded or at least have a dominant main thread.

In this paper we present measurements of the performance of a real SMT system. This is preferable to simulation as using an actual system is the only way to guarantee that all contributory factors are captured in the measurements. Of particular interest was the effect of the operating system and the memory hierarchy, features sometimes simplified or ignored in simulation studies. Furthermore, most simulation studies are based on SMTSIM [13], which differs from Intel's Hyper-Threading in a number of major ways including the number of threads available, the degree of dynamic sharing and the instruction set architecture.

A study of this nature is useful as an aid to understanding the benefits and limitations of Hyper-Threading. This work is part of a larger study of practical operating system support for Hyper-Threading. The observations from the experiments described here are helping to drive the design of a Hyper-Threading-aware process scheduler.

It is of interest to compare the performance of pairs of processes executing with different degrees of resource sharing. The scenarios are:

  • Shared-memory symmetric multiprocessing (SMP), where the memory and its bus are the main shared resources.
  • SMT, with its fine grained sharing of all resources.
  • Round-robin context switching, with the non-simultaneous sharing of the caches.

The obvious result is that in general the per-thread and aggregate performance will be the highest on the SMP system. However, at a practical level, particularly for the mass desktop market, one must consider the economic advantages of a single physical processor SMT system. It is therefore useful to know how much better the SMP performance is.

2 Related Work

Much of the early simultaneous multithreading (SMT) work studied the performance of various benchmarks in order to demonstrate the effectiveness of the architecture [15, 14, 5]. Necessarily these studies were simulation based and considered the application code rather than the entire system including the operating system effects. Whilst useful, the results from these studies do not directly apply to current hardware as the microarchitecture and implementation can drastically change the behaviour.

Snavely et al. use the term "symbiosis" to describe how concurrently running threads can improve the throughput of each other [8]. They demonstrated the effect on the Tera MTA and a simulated SMT processor.

Redstone et al. investigated the performance of workloads running on a simulated SMT system with a full operating system [7]. They concluded that the time spent executing in the kernel can have a large impact on the speedup measurements compared to a user-mode only study. They report that the inclusion of OS effects in a SPECInt95 study has less impact on SMT performance measurements than it does on non-SMT superscalar results, due to the better latency hiding of SMT being able to mask the poorer IPC of the kernel parts of the execution. This result is important as it means that a comparison of SMT to superscalar without taking the OS into account would not do the SMT architecture justice.

In a study of database performance on SMT processors, Lo et al. noted that the large working set of this type of workload can reduce the benefit of using SMT unless a page placement policy is used to keep the important "critical" working set in the cache [4].

Grunwald and Ghiasi used synthetic workloads running on a Hyper-Threaded Pentium 4 to show that a malicious thread can cause a huge performance impact on a concurrently running thread through careful targeting of shared resources [2]. Our work has a few features in common with that of Grunwald and Ghiasi, but we are more interested in the mutual effects of non-malicious applications that may not have been designed or compiled with Hyper-Threading in mind. Some interesting results from their study were the large impact of a trace-cache flush caused by self-modifying code, and of a pipeline flush caused by floating-point underflow.

Vianney assessed the performance of Linux under Hyper-Threading using a number of microbenchmarks, and compared the performance of some multithreaded workloads running on a single processor with and without Hyper-Threading enabled [17].

Our experiments used a Linux 2.4.19 kernel with the cpus_allowed patch applied. This patch provides an interface to the Linux cpus_allowed task attribute and allows the specification of which processor(s) a process can be executed on. This is particularly important as the scheduler in Linux 2.4.19 is not aware of Hyper-Threading. Use of this patch prevented threads being migrated to another processor (logical or physical). A /proc file was added to allow lightweight access to the processor performance counters.

Details of the experimental machine are given in table 2. The machine contained two physical processors, each having two logical (Hyper-Threaded) processors. Also included are details given by Tuck and Tullsen for their experimental machine [12], to which Intel gave them early access, which would explain the non-standard clock speed and L2 cache combination.

For each pair of processes studied the following procedure was used. Both processes were given a staggered start and each was run continuously in a loop. The timings of runs were ignored until both processes had completed at least one run. The experiment continued until both processes had accumulated 3 timed runs. Note that the process with the shorter run-time will have completed more than 3 runs. This method guaranteed that there were always two active processes and allowed the caches, including the [...]
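The cpus_allowed patch described above predates the standard Linux processor-affinity system call, but its effect, restricting a benchmark process to one chosen logical processor so that the scheduler cannot migrate it, can be illustrated with the sched_setaffinity() interface available in later kernels. The following is a minimal sketch under that assumption, not the authors' actual mechanism; the program name and the assumption that logical CPUs 0 and 1 are the two threads of one physical package are purely illustrative.

    /* pin.c: run a command with its CPU affinity restricted to one logical
     * processor.  Illustrates the effect of the cpus_allowed patch using the
     * later sched_setaffinity() system call (an assumption, not the paper's
     * own interface). */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void pin_to_cpu(int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);                   /* allow exactly one logical CPU */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            exit(1);
        }
    }

    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <cpu> <command> [args...]\n", argv[0]);
            return 1;
        }
        pin_to_cpu(atoi(argv[1]));            /* e.g. 0 or 1 within one package */
        execvp(argv[2], &argv[2]);            /* run the benchmark, pinned */
        perror("execvp");
        return 1;
    }

Running a pair as "./pin 0 benchA" and "./pin 1 benchB" (placeholder benchmark names) would keep the two processes on the two logical processors of a single physical processor, assuming that numbering; the experiments in the paper achieved the same pinning through the patch's interface on Linux 2.4.19.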
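The pairing procedure in the last paragraph (staggered starts, each benchmark looping continuously, timings ignored until both processes have completed a run, three timed runs kept per benchmark) could be approximated by a small per-benchmark driver that logs the start and end of every run and leaves the warm-up and three-run filtering to a post-processing step. This is a sketch of one possible harness, not the authors' code; the fork/exec structure and CLOCK_MONOTONIC timestamps are assumptions.

    /* runloop.c: run one benchmark of a pair continuously, logging the
     * wall-clock start and end of every complete run so that runs which began
     * before both benchmarks had finished their first run can be discarded
     * afterwards. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    static double now_seconds(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <command> [args...]\n", argv[0]);
            return 1;
        }
        for (int run = 0; ; run++) {          /* loop until the experiment is stopped */
            double start = now_seconds();
            pid_t pid = fork();
            if (pid == 0) {
                execvp(argv[1], &argv[1]);    /* one complete benchmark run */
                perror("execvp");
                _exit(127);
            }
            waitpid(pid, NULL, 0);
            double end = now_seconds();
            printf("run %d start %.3f end %.3f elapsed %.3f\n",
                   run, start, end, end - start);
            fflush(stdout);                   /* keep the log usable if killed */
        }
    }

One such driver would be started, staggered, for each benchmark of a pair, each pinned to its own logical processor (or to separate physical processors for the SMP comparison); the logged timestamps are then filtered to drop runs that began before both benchmarks had completed one run and to keep the first three remaining runs of each.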
