Using Incorrect Speculation to Prefetch Data in a Concurrent Multithreaded Processor

Ying Chen, Resit Sendag, and David J. Lilja
Department of Electrical and Computer Engineering
Minnesota Supercomputing Institute
University of Minnesota
200 Union St. S.E., Minneapolis, MN 55455, USA
{wildfire, rsgt, lilja}@ece.umn.edu

Abstract

Concurrent multithreaded architectures exploit both instruction-level and thread-level parallelism through a combination of branch prediction and thread-level control speculation. The resulting speculative issuing of load instructions in these architectures can significantly impact the performance of the memory hierarchy as the system exploits higher degrees of parallelism. In this study, we investigate the effects of executing mispredicted load instructions on the cache performance of a scalable multithreaded architecture. We show that the execution of loads from the wrongly-predicted branch path within a thread, or from a wrongly-forked thread, can result in an indirect prefetching effect for later correctly-executed paths. By continuing to execute the mispredicted load instructions even after the instruction- or thread-level control speculation is known to be incorrect, the cache misses for the correctly predicted paths and threads can be reduced, typically by 42-73%. We introduce the small, fully-associative Wrong Execution Cache (WEC) to eliminate the potential pollution that can be caused by the execution of the mispredicted load instructions. Our simulation results show that the WEC can improve the performance of a concurrent multithreaded architecture by up to 18.5% on the benchmark programs tested, with an average improvement of 9.7%, due to the reductions in the number of cache misses.

1. Introduction

A concurrent multithreaded architecture (CMA) [1] consists of a number of thread processing elements (superscalar cores) interconnected with some tightly-integrated communication network [2]. Each superscalar processor core can use branch prediction to speculatively execute instructions beyond basic block-ending conditional branches. If a branch prediction ultimately turns out to be incorrect, the processor state must be restored to the state prior to the predicted branch and execution is restarted down the correct path. In addition, a concurrent multithreaded architecture can aggressively fork speculative successor threads to further increase the amount of parallelism that can be exploited in an application program. If a speculated control dependence turns out to be incorrect, the non-speculative head thread must kill all of its speculative successor threads.

With both instruction- and thread-level control speculation, a multithreaded architecture may issue many memory references which turn out to be unnecessary, since they are issued from what subsequently is determined to be a mispredicted branch path or a mispredicted thread. However, these incorrectly issued memory references may produce an indirect prefetching effect by bringing data or instruction lines into the cache that are needed later by correctly-executed threads and branch paths.

Existing superscalar processors with deep pipelines and wide issue units do allow memory references to be issued speculatively down wrongly-predicted branch paths. We go one step further, however, and examine the effects of continuing to execute the loads issued from both mispredicted branch paths and mispredicted threads even after the speculative operation is known to be incorrect. We propose the Wrong Execution Cache (WEC) to eliminate the potential cache pollution caused by executing the wrong-path and wrong-thread loads. This work shows that the execution of wrong-path or wrong-thread loads can produce a significant performance improvement with very low overhead.

In the remainder of the paper, Section 2 presents an overview of the superthreaded architecture [2], which is the base architecture used for this study. Section 3 describes wrong execution loads and the implementation of the WEC in the base processor. Our experimental methodology is presented in Section 4, with the corresponding results given in Section 5. Section 6 discusses some related work and Section 7 concludes.

2. The Superthreaded Architecture

2.1. Base Architecture Model

The superthreaded architecture (STA) [2] consists of multiple thread processing units (TUs), with each TU connected to its successor by a unidirectional communication ring. Each TU has its own private level-one (L1) instruction cache, L1 data cache, program counter, register file, and execution units. The TUs share a unified second-level (L2) cache. There is also a shared register file that maintains some global control and lock registers. A private memory buffer is used in each TU to cache speculative stores for run-time data dependence checking.

When multiple threads are executing on an STA processor, the oldest thread in the sequential order is called the head thread, and all other threads derived from it are called successor threads. Program execution starts from the entry thread while all other TUs are idle. When a parallel code region is encountered, this thread activates its downstream thread by forking. This forking continues until there are no idle TUs. When all TUs are busy, the youngest thread delays forking another thread until the head thread retires and its corresponding TU becomes idle. A thread can be forked either speculatively or non-speculatively. A speculatively forked thread will be aborted by its predecessor thread if the speculative control dependence subsequently turns out to be false.

2.2. Thread Pipelining Execution Model

The execution model for the STA is thread pipelining, which allows threads with data and control dependences to be executed in parallel. Instead of speculating on data dependences, the thread execution model facilitates run-time data dependence checking for load instructions. This approach avoids the squashing of threads caused by data dependence violations. It also reduces the hardware complexity of the logic needed to detect memory dependence violations compared to some other CMA execution models [3,4]. As shown in Figure 1, the execution of a thread is partitioned into the continuation stage, the target-store address-generation (TSAG) stage, the computation stage, and the write-back stage.

[Figure 1. Thread pipelining execution model — three threads shown executing in overlapped fashion, each passing through the continuation stage (fork and forward continuation variables), the TSAG stage (forward target store addresses, signal TSAG_DONE), the computation stage (forward target store addresses and data), and the write-back stage (signal WB_DONE).]

The continuation stage computes recurrence variables (e.g., loop index variables) needed to fork a new thread on the next TU. This stage ends with a fork instruction, which initiates a new speculative or non-speculative thread on the next TU. An abort instruction is used to kill the successor threads when it is determined that a speculative execution was incorrect. Note that the continuation stages of two adjacent threads can never overlap.

In the TSAG stage, the computed target-store addresses are stored in the memory buffer of each TU and are forwarded to the memory buffers of all succeeding concurrent TUs.

The computation stage performs the actual computation of the loop iteration. If a cross-iteration dependence is detected by checking the addresses in the memory buffer [2], but the data has not yet arrived from the upstream thread, the out-of-order superscalar core will execute instructions that are independent of the load operation that is waiting for the upstream data value.

In the write-back stage, all of the store data (including target stores) in the memory buffer are committed and written to the cache memory. The write-back stages are performed in the original program order to preserve the non-speculative memory state and to eliminate output and anti-dependences between threads.

3. The Wrong Execution Cache (WEC)

3.1. Wrong Execution

There are two types of wrong execution that can occur in a concurrent multithreaded architecture such as the STA processor. The first type occurs when instructions continue to be issued down the path of what turns out to be an incorrectly-predicted conditional branch instruction within a single thread. We refer to this type of execution as wrong path execution. The second type occurs when instructions are executed from a thread that was speculatively forked, but is subsequently aborted. We refer to this type of incorrect execution as wrong thread execution. Our interest in this study is to examine the effects on the memory hierarchy of load instructions that are issued from both of these types of wrong execution.

3.1.1. Wrong Path Execution

Before a branch is resolved, some load instructions on wrongly-predicted branches may not be ready to be issued because they are waiting either for the effective address to be calculated or for an available memory port. In wrong path execution, however, they are allowed to access the memory system as soon as they are ready, even though they are known to be from the wrong path. These instructions are marked as being from a wrong execution path when they are issued so that they can be squashed in the pipeline at the write-back stage. A wrong-path load that is dependent upon another instruction that gets flushed after the branch is resolved is also flushed in the same cycle. No wrong-execution store instructions are allowed to alter the memory system since they are known to be invalid.

An example showing the difference between traditional speculative execution and our definition of wrong-path execution is given in Figure 2. There are five loads (A, B, C, D, and E) fetched down the predicted execution path. In a typical pipelined processor, loads A and B become ready and are issued to the memory system speculatively before the branch is resolved. After the branch result is known to be wrong, however,
