Hyper-Threading Aware Process Scheduling Heuristics

James R. Bulpin and Ian A. Pratt
University of Cambridge Computer Laboratory
[email protected], http://www.cl.cam.ac.uk/netos
2005 USENIX Annual Technical Conference, USENIX Association

Abstract

Intel Corporation's "Hyper-Threading Technology" is the first commercial implementation of simultaneous multithreading. Hyper-Threading allows a single physical processor to execute two heavyweight threads (processes) at the same time, dynamically sharing processor resources. This dynamic sharing of resources, particularly caches, causes a wide variety of inter-thread behaviour. Threads competing for the same resource can experience a low combined throughput.

Hyper-Threads are abstracted by the hardware as logical processors. Current generation operating systems are aware of the logical-physical processor hierarchy and are able to perform simple load-balancing. However, the particular resource requirements of the individual threads are not taken into account and sub-optimal schedules can arise and remain undetected.

We present a method to incorporate knowledge of per-thread Hyper-Threading performance into a commodity scheduler through the use of hardware performance counters and the modification of dynamic priority.

1 Introduction

Simultaneous multithreaded (SMT) processors allow multiple threads to execute in parallel, with instructions from multiple threads able to be executed during the same cycle [10]. The availability of a large number of instructions increases the utilization of the processor because of the increased instruction-level parallelism.

Intel Corporation introduced the first commercially available implementation of SMT to the Pentium 4 [2] processor as "Hyper-Threading Technology" [3, 5, 4]. Hyper-Threading is now common in server and desktop Pentium 4 processors and becoming available in the mobile version. The individual Hyper-Threads of a physical processor are presented to the operating system as logical processors. Each logical processor can execute a heavyweight thread (process); the OS and applications need not be aware that the logical processors are sharing physical resources. However, some OS awareness of the processor hierarchy is desirable in order to avoid circumstances such as a two physical processor system having two runnable processes scheduled on the two logical processors of one package (and therefore sharing resources) while the other package remains idle. Current generation OSs such as Linux (version 2.6 and later versions of 2.4) and Windows XP have this awareness.

When processes share a physical processor the sharing of resources, including the fetch and issue bandwidth, means that they both run slower than they would do if they had exclusive use of the processor. In most cases the combined throughput of the processes is greater than the throughput of either one of them running exclusively — the system provides increased system-throughput at the expense of individual processes' throughput. The system-throughput "speedup" of running tasks using Hyper-Threading compared to running them sequentially is of the order of 20% [1, 9]. We have shown previously that there are a number of pathological combinations of workloads that can give a poor system-throughput speedup or give a biased per-process throughput [1]. We argue that the operating system process scheduler can improve throughput by trying to schedule processes simultaneously that have a good combined throughput. We use measurement of the processor to inform the scheduler of the realized performance.

2 Performance Estimation

In order to provide throughput-aware scheduling the OS needs to be able to quantify the current per-thread and system-wide throughput. It is not sufficient to measure throughput as instructions per cycle (IPC) because processes with natively low IPC would be misrepresented. We choose instead to express the throughput of a process as a performance ratio specified as its rate of execution under Hyper-Threading versus its rate of execution when given exclusive use of the processor. An application that takes 60 seconds to execute in the exclusive mode and 100 seconds when running with another application under Hyper-Threading has a performance ratio of 0.6. The system speedup of a pair of simultaneously executing processes is defined as the sum of their performance ratios. Two instances of the previous example running together would have a system speedup of 1.2 — the 20% Hyper-Threading speedup described above.

The performance ratio and system speedup metrics both require knowledge of a process' exclusive mode execution time and are based on the complete execution of the process. In a running system the former is not known and the latter can only be known once the process has terminated, by which time the knowledge is of little use. It is desirable to be able to estimate the performance ratio of a process while it is running. We want to be able to do this online using data from the processor hardware performance counters. A possible method is to look for a correlation between performance counter values and calculated performance; work on this estimation technique is ongoing. However, we present here a method used to derive a model for online performance ratio estimation using an analysis of a training workload set.

Using a similar technique to our previous measurement work [1] we executed pairs of SPEC CPU2000 benchmark applications on the two logical processors of a Hyper-Threaded processor, each application running in an infinite loop on its logical processor. Performance counter samples were taken at 100ms intervals with the counters configured to record, for each logical processor, the cache miss rates for the L1 data, trace- and L2 caches, instructions retired and floating-point operations. A stand-alone base dataset was generated by executing the benchmark applications with exclusive use of the processor, recording the number of instructions retired over time. Execution runs were split into 100 windows of equal instruction count. For each window the number of processor cycles taken to execute that window under both Hyper-Threading and exclusive mode were used to compute a performance ratio for that window. The performance counter samples from the Hyper-Threaded runs were interpolated and divided by the cycle counts to give events-per-cycle (EPC) data for each window. A set of 8 benchmark applications (integer and floating-point, covering a range of behaviours) were run in a cross-product with 3 runs of each pair, leading to a total of 16,800 windows each with EPC data for the events for both the application's own, and the "background", logical processor.

A multiple linear regression analysis was performed using the EPC data as the explanatory variables and the application's performance ratio as the dependent variable. The coefficient of determination (the R2 value), an indication of how much of the variability of the dependent variable can be explained by the explanatory variables, was 66.5%, a reasonably good correlation considering this estimate is only to be used as a heuristic. The coefficients for the explanatory variables are shown in Table 1 along with the mean EPC for each variable (shown to put the magnitudes of the variables into context). The fourth column of the table indicates the importance of each counter in the model by multiplying the standard deviation of that metric by the coefficient; a higher absolute value here shows a counter that has a greater effect on the predicted performance. Calculation of the p-values showed the L2 miss rate of the background process to be statistically insignificant (in practical terms this metric is covered largely by the IPC of the background process).

Counter      Coefficient   Mean events per   Importance
             (to 3 S.F.)   1000 cycles       (coeff. x st.dev.)
                           (to 3 S.F.)
(Constant)    0.4010
TC-subj      29.7000        0.554             26.2
L2-subj      55.7000        1.440             87.2
FP-subj       0.3520       52.0               29.8
Insts-subj   -0.0220      258                 -4.3
L1-subj       2.1900       10.7               15.4
TC-back      32.7000        0.561             29.0
L2-back       1.5200        1.43               2.3
FP-back      -0.4180       52.6              -35.3
Insts-back    0.5060      256                 99.7
L1-back      -3.5400       10.6              -25.3

Table 1: Multiple linear regression coefficients for estimating the performance ratio of the subject process. The performance counters for the logical processor executing the subject process are suffixed "subj" and those for the background process's logical processor, "back".

The experiment was repeated with a different subset of benchmark applications and the MLR model was used to predict the performance ratio for each window. The coefficient of correlation between the estimated and measured values was 0.853, a reasonably strong correlation. We are investigating refinements to this model by considering other performance counters and input data.

3 Scheduler Modifications

Rather than design a Hyper-Threading aware scheduler from the ground up, we argue that gains can be made by making modifications to existing scheduler designs. We wish to keep existing functionality such as starvation avoidance, static priorities and (physical) processor affinity. A suitable location for Hyper-Threading awareness would be in the calculation of dynamic priority; a candidate runnable process could be given a higher dynamic priority if it is likely to perform well with the process currently executing on the other logical processor.

Our implementation uses the Linux 2.4.19 kernel. This kernel has basic Hyper-Threading support in areas other than the scheduler. We modify the goodness() function which is used to calculate the dynamic priority for each runnable task when a scheduling decision is being made; the task with the highest goodness is executed.

[Figure: system speedup of the native, basic, tryhard, plan and ipc schedulers; y-axis: speedup, ticks 1.3-1.45.]
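The performance-ratio and system-speedup definitions of Section 2, together with the Table 1 regression, can be sketched in user-space C. This is an illustrative sketch, not the authors' implementation: the struct and function names are invented, and it assumes the Table 1 coefficients apply to raw events-per-cycle values (the table quotes mean rates per 1000 cycles, so those means must be divided by 1000 before use).

```c
#include <assert.h>

/* Events per cycle (EPC) for one measurement window on one logical
 * processor: trace-cache, L2 and L1 data-cache miss rates, retired
 * instructions and floating-point operations.  Names are illustrative. */
struct epc_sample {
    double tc;    /* trace-cache misses per cycle */
    double l2;    /* L2 cache misses per cycle */
    double fp;    /* floating-point operations per cycle */
    double insts; /* instructions retired per cycle */
    double l1;    /* L1 data-cache misses per cycle */
};

/* Performance ratio: rate of execution under Hyper-Threading versus
 * exclusive use; e.g. 60s exclusive vs 100s shared gives 60/100 = 0.6. */
double perf_ratio(double exclusive_secs, double ht_secs)
{
    return exclusive_secs / ht_secs;
}

/* System speedup of a pair of simultaneously executing processes is
 * defined in the text as the sum of their performance ratios. */
double system_speedup(double ratio_a, double ratio_b)
{
    return ratio_a + ratio_b;
}

/* Table 1 coefficients: constant term, then the subject-process and
 * background-process counters in the order TC, L2, FP, Insts, L1. */
static const double C0 = 0.4010;
static const double C_SUBJ[5] = { 29.70, 55.70,  0.3520, -0.0220,  2.190 };
static const double C_BACK[5] = { 32.70,  1.520, -0.4180,  0.5060, -3.540 };

/* Estimate the subject process's performance ratio for one window
 * from its own EPC sample and the background processor's sample. */
double estimate_perf_ratio(const struct epc_sample *subj,
                           const struct epc_sample *back)
{
    const double s[5] = { subj->tc, subj->l2, subj->fp, subj->insts, subj->l1 };
    const double b[5] = { back->tc, back->l2, back->fp, back->insts, back->l1 };
    double r = C0;
    int i;
    for (i = 0; i < 5; i++)
        r += C_SUBJ[i] * s[i] + C_BACK[i] * b[i];
    return r;
}
```

Under these assumptions, feeding in the mean event rates from Table 1 (divided by 1000 to convert to per-cycle values) yields an estimate of roughly 0.62, which is in the range one would expect for per-process throughput under Hyper-Threading.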

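The goodness()-based modification described above can be illustrated with a small sketch. This is not the paper's actual kernel patch: the bonus formula, the HT_BONUS_SCALE constant and the function names are invented for illustration, and base_goodness() only stands in for the Linux 2.4 goodness() calculation. The idea shown is simply that a runnable candidate predicted to co-run well with the task on the sibling logical processor receives a higher dynamic priority.

```c
/* Illustrative sketch of Hyper-Threading-aware dynamic priority.
 * All names and constants here are invented, not from the paper. */

#define HT_BONUS_SCALE 10  /* hypothetical weight given to the HT bonus */

/* Stand-in for Linux 2.4's goodness(): a task with timeslice left
 * gets its remaining counter plus a static-priority contribution. */
static int base_goodness(int counter, int nice)
{
    return counter > 0 ? counter + (20 - nice) : 0;
}

/* Bias the dynamic priority by the candidate's estimated performance
 * ratio (0..1) when paired with the sibling logical processor's
 * current task.  In a real system the estimate would come from the
 * performance-counter model of Section 2. */
int ht_aware_goodness(int counter, int nice, double est_ratio)
{
    int g = base_goodness(counter, nice);
    if (g > 0)
        g += (int)(HT_BONUS_SCALE * est_ratio);
    return g;
}
```

With two candidates of equal remaining timeslice and nice value, the one whose predicted co-run performance ratio is higher wins the scheduling decision, while tasks with no timeslice left keep a goodness of zero so starvation avoidance is untouched.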