
Speculation with Little Wasting: Saving Cost in Software Speculation through Transparent Learning

Yunlian Jiang, Feng Mao, Xipeng Shen
Department of Computer Science
The College of William and Mary, Williamsburg, VA, USA 23185
{jiang,fmao,xshen}@cs.wm.edu

Abstract—Software speculation has shown promise in parallelizing programs with coarse-grained dynamic parallelism. However, most speculation systems use offline profiling for the selection of speculative regions. The mismatch with the input-sensitivity of dynamic parallelism may result in large numbers of speculation failures in many applications. Although, with certain protection, the failed speculations may not hurt the basic efficiency of the application, the wasted computing resources (e.g., CPU time and power consumption) may severely degrade system throughput and efficiency. The importance of this issue continuously increases with the advent of multicore and parallelization in portable devices and multiprogramming environments.

In this work, we propose the use of transparent statistical learning to make speculation cross-input adaptive. Across production runs of an application, the technique recognizes the patterns of the profitability of the speculative regions in the application and the relation between the profitability and program inputs. On a new run, the profitability of the regions is predicted accordingly and the speculations are switched on and off adaptively. The technique differs from previous techniques in that it requires no explicit training, but is able to adapt to changes in program inputs. It is applicable to both loop-level and function-level parallelism by learning across iterations and executions, permitting arbitrary depth of speculations. Its implementation in a recent software speculation system, namely the Behavior-Oriented Parallelization system, shows substantial reduction of speculation cost with negligible decrease (sometimes, considerable increase) of parallel execution performance.

Index Terms—Adaptive Speculation, Behavior-Oriented Parallelization, Multicore, Transparent Learning

I. INTRODUCTION

Recent years have seen a rapid shift of processor technology to favor chip multiprocessors. Many existing programs, however, cannot fully utilize all CPUs in a system yet, even though dynamic high-level parallelism exists in those programs. Examples include a compression tool processing data buffer by buffer, an English parser parsing sentence by sentence, an interpreter interpreting expression by expression, and so on. These programs are complex and may make extensive use of bit-level operations, unrestricted pointers, exception handling, custom memory management, and third-party libraries. The unknown data access and control flow make such applications difficult, if not impossible, to parallelize in a fully automatic manner. On the other hand, manual parallelization is a daunting task for complex programs, especially for pre-existing ones. Moreover, the complexity and the uncertain performance gain due to input-dependence make it difficult to justify the investment of time and the risk of errors of the manual efforts.

Software speculation has recently shown promising results in parallelizing such programs [4], [18]. The basic idea is to dynamically create multiple speculative processes (or threads), each of which skips part of the program and speculatively executes the next part. As those processes run simultaneously with the main process, their successes shorten the execution time. But speculative executions may fail because of dependence violations or being too slow to be profitable (elaborated in Section II).

In systems with no need for rollback upon speculation failures—such as the behavior-oriented parallelization (BOP) system [4]—failed speculations result in the waste of computing resources (e.g., CPU and memory) and hence inferior computing efficiency. The waste is a serious concern especially for multi-programming or power-constrained environments (e.g., laptops, embedded systems). For systems where rollback is necessary, an additional consequence is the degradation of program performance.

Therefore, the avoidance of speculation failures is important for the cost efficiency of modern machines. Previous studies—mostly in thread-level speculation—have tried to tackle this problem through profiling-based techniques (e.g., [6], [10], [19]). The main idea is to determine the regions in a program that are most beneficial for speculation by profiling some training runs.

The strategy, however, is often insufficient for coarse-grained software speculation, because of the input-sensitive and dynamic properties of the parallelism. In a typical application handled by software speculation, the profitability (i.e., likelihood to succeed) of a speculative region often differs among executions on different program inputs, or even among different phases of a single execution. The profiling-based region selection can help, but unfortunately, is not enough for software speculation to adapt to the changes in program inputs and phases.

In a recent work [9], we proposed adaptive speculation to address the limitations of the profiling-based strategy. The idea is to predict the profitability of every instance of a speculative region online, and to adapt the speculation accordingly. It departs from previous profiling-based techniques, which, after selecting the speculative region (e.g., a loop), typically speculate on every instance (e.g., an iteration of a loop) of the region with no runtime adaptation.

The proposed adaptive speculation has successfully avoided unprofitable instances of speculative regions, and noticeably improved cost efficiency [9]. However, two limitations of the technique severely impair its applicability and scalability. First, it can only handle loop-level (case (a) in Figure 1) but not function-level speculations (case (b) in Figure 1). This limitation is inherent to the runtime adaptation algorithm in the technique: Its prediction is based on previous instances of a speculation region in the current execution, whereas, in function-level speculation, a region often has only a few invocations in the entire execution, making the prediction difficult. Such regions, on the other hand, often compose a major portion of the applications that rely on function-level parallelism. Thus, failed speculations are especially costly for those applications.

    ...                        ...
    while (1) {                BeginPPR(1);
      get_work();              work(x);
      ...                      EndPPR(1);
      BeginPPR(1);             ...
      work();                  BeginPPR(2);
      EndPPR(1);               work(y);
      ...                      EndPPR(2);
    }                          ...
    (a) loop-level             (b) function-level

Fig. 1. The speculation unit in BOP is a possibly parallel region (PPR), labeled by BeginPPR(p) and EndPPR(p). The two examples illustrate the two kinds of PPRs BOP handles.

In this work, we propose a cross-run transparent learning scheme to address the challenges facing the adaptation of function-level speculations. The new scheme differs from the previous technique [9] substantially: Rather than adapting purely upon history instances, it uses incremental machine learning techniques (Classification Trees) to uncover the relation between program inputs and the profitability patterns of each speculation region. With the new scheme, the profitability of speculation regions can be predicted as soon as an execution starts on arbitrary inputs. Developing such a scheme has to overcome two difficulties. First, program inputs can be very complex, with many options and attributes. The scheme has to extract the features that are important to the profitability of a region. Our approach employs an input characterization technique to address that problem. The second challenge is to predict with confidence. Because our approach builds the predictive model incrementally, it is important to determine whether the current model is accurate enough for use. Our design addresses the problem by integrating self-evaluation into the learning process.

The second limitation of the previously proposed adaptive speculation is that it is not scalable: The speculation depth (i.e., the number of speculative

We implement both techniques in BOP [4], a recent software speculation system. Evaluations on a chip multiprocessor machine demonstrate that the proposed techniques are effective in preventing unprofitable speculations without sacrificing profitable ones. The techniques help BOP save a significant amount of cost, and meanwhile, cause little decrease, and often an increase, in program performance. The cost efficiency is enhanced significantly.

In the rest of this paper, Section II gives a brief review of the BOP system. Section III-A describes the transparent learning for function-level speculation. Section III-B presents the algorithm for scalable loop-level adaptive speculation. Section IV reports evaluation results, followed by a discussion on related work and a short summary.

II. REVIEW ON BOP

BOP creates one or more speculative processes to execute some PPR instances in parallel with the execution of a lead process. Figure 2 illustrates the run-time mechanism. The left part of the figure shows the sequential execution of three PPR instances, P, Q, and R (which can be the instances of either a single PPR or different PPRs). The right part shows the parallel execution enabled by BOP. The execution starts with a single process, named the lead process. When the lead process reaches the start marker of P, m_P^b, it forks the first speculative process, spec 1, and then continues to execute the first PPR instance. Spec 1 jumps to the end marker of P and executes from there.
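The forking mechanism just described can be sketched with POSIX processes. This is a loose illustration, not BOP's implementation: the work function, the PPR arguments, and the pipe-based result passing are our stand-ins (BOP actually commits speculative state by copying modified pages and performs correctness checks omitted here).

```c
#include <assert.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical PPR body; stands in for the work() calls in Figure 1. */
static int work(int x) { return x * x; }

/* Execute two PPR instances, P and Q: the lead process runs P while
   a forked speculative process jumps past P and runs Q. The speculative
   result returns over a pipe, loosely mimicking the commit of a
   successful speculation. Returns 0 on success, -1 on failure. */
static int run_ppr_pair(int *p_out, int *q_out) {
    int fd[2];
    if (pipe(fd) != 0) return -1;

    pid_t spec = fork();
    if (spec < 0) return -1;
    if (spec == 0) {                    /* speculative process */
        int q = work(3);                /* skip P; execute Q   */
        (void)write(fd[1], &q, sizeof q);
        _exit(0);
    }

    *p_out = work(2);                   /* lead process executes P  */
    (void)read(fd[0], q_out, sizeof *q_out); /* collect Q's result  */
    close(fd[0]);
    close(fd[1]);
    waitpid(spec, NULL, 0);
    return 0;
}
```

Note the design point this highlights: because speculation uses separate processes rather than threads, a failed speculation can simply be discarded (the child exits) without any rollback in the lead process.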
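The per-instance runtime adaptation of [9], discussed in the introduction, can be illustrated with a minimal sketch. This is our illustration rather than the exact algorithm of [9]: a decayed success-rate estimate per PPR decides whether the next instance is worth speculating on. The names (AdaptiveGate, should_speculate) and the constants are hypothetical.

```c
#include <stdbool.h>

/* Illustrative adaptive gate for one PPR: an exponentially decayed
   estimate of speculation profitability gates future speculations. */
typedef struct {
    double success_rate;   /* estimated profitability in [0,1] */
    double threshold;      /* speculate only at or above this  */
} AdaptiveGate;

static bool should_speculate(const AdaptiveGate *g) {
    return g->success_rate >= g->threshold;
}

/* After each speculated instance, fold its outcome into the estimate. */
static void record_outcome(AdaptiveGate *g, bool succeeded) {
    const double alpha = 0.25;   /* weight of the newest outcome */
    g->success_rate = (1 - alpha) * g->success_rate
                    + alpha * (succeeded ? 1.0 : 0.0);
}
```

A gate of this kind adapts within one execution, which is exactly why it suits loop-level regions with many instances and fails for function-level regions invoked only a few times.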
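To make the cross-run prediction idea concrete, the following sketch shows the smallest form such a predictor could take: a single decision-tree node ("stump") keyed on one extracted input feature. The actual scheme builds full classification trees incrementally across production runs and adds self-evaluation; the struct, the field names, and the example feature here are all hypothetical.

```c
#include <stdbool.h>

/* One decision-tree node predicting whether a PPR is profitable
   from a vector of characterized input features. */
typedef struct {
    int    feature;      /* index into the input-feature vector    */
    double threshold;    /* split point learned from past runs     */
    bool   left_label;   /* prediction when feature < threshold    */
} Stump;

static bool predict_profitable(const Stump *s, const double *features) {
    return features[s->feature] < s->threshold ? s->left_label
                                               : !s->left_label;
}
```

Because the model is consulted before any instance of the region runs, the prediction is available as soon as the execution starts, which is what lets the scheme cover function-level regions with few invocations.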