Pressure-Driven Hardware Managed Thread Concurrency for Irregular Applications
John D. Leidel, Xi Wang, and Yong Chen
Texas Tech University, Box 43104, Lubbock, Texas 79409-3104
[email protected] [email protected] [email protected]

ABSTRACT

Given the increasing importance of efficient data intensive computing, we find that modern processor designs are not well suited to the irregular memory access patterns found in these algorithms. This research focuses on mapping the compiler's instruction cost scheduling logic to hardware managed concurrency controls in order to minimize pipeline stalls. In this manner, the hardware modules managing the low-latency thread concurrency can be directly understood by modern compilers. We introduce a thread context switching method that is managed directly via a set of hardware-based mechanisms that are coupled to the compiler instruction scheduler. As individual instructions from a thread execute, their respective cost is accumulated into a control register. Once the register reaches a pre-determined saturation point, the thread is forced to context switch. We evaluate the performance benefits of our approach using a series of 24 benchmarks that exhibit performance acceleration of up to 14.6X.

CCS CONCEPTS

• Computer systems organization → Multiple instruction, multiple data; Multicore architectures; • Software and its engineering → Source code generation; • Computing methodologies → Shared memory algorithms; Massively parallel algorithms;

KEYWORDS

Data intensive computing; irregular algorithms; thread concurrency; context switching

ACM Reference format:
John D. Leidel, Xi Wang, and Yong Chen. 2017. Pressure-Driven Hardware Managed Thread Concurrency for Irregular Applications. In Proceedings of IA3'17: Seventh Workshop on Irregular Applications: Architectures and Algorithms, Denver, CO, USA, November 12–17, 2017 (IA3'17), 8 pages. DOI: 10.1145/3149704.3149705

1 INTRODUCTION

In recent years, data intensive computing has garnered increased research and development attention in commercial and academic settings. The need for highly parallel algorithms and applications that support machine learning, graph theory, artificial intelligence and raw data analytics has spurred a flurry of research into novel hardware and software methodologies to accelerate these classes of applications.

One of the driving issues in coping with parallel, data intensive computing algorithms and applications is the inherent irregularity of their memory access patterns. These irregular memory access patterns, which often fall outside of traditional multi-level caching hierarchies, induce increased memory latencies that often stall the execution pipelines of the core. These pipeline stalls prevent efficient execution, and thus increase the runtime of the algorithms.

Multi-threading and multi-tasking have been utilized in previous architectures [13][4][17][32] to cope with these periods of latency by overlapping the memory accesses and execution of adjacent threads or tasks. However, the operating system and system software overhead required to manage this thread/task state, as well as to perform the fundamental context switch operations (context save and restore), consumes a significant number of execution cycles. As a result, these purely software-driven methods often conflict with the fundamental goal of efficiently filling the hardware's execution pipeline.

In this work, we present a novel methodology that couples an in-order, RISC execution pipeline to a low-latency context switching mechanism that stands to dramatically increase the efficiency and, subsequently, the throughput of thread- or task-parallel data intensive computing algorithms and applications. At the core of our approach is a novel hardware-driven method that monitors the pressure a thread or task places on the execution pipeline by assigning costs to individual opcode classes early in the instruction decode stage. When threads or tasks exceed a maximum cost threshold, they are forced to yield the execution pipeline to adjacent threads, thus preventing significant pipeline stalls due to latent execution events.

The contributions of this research are as follows. We present a methodology for constructing in-order RISC execution pipelines that provides low latency context switching mechanisms between in-situ, hardware managed threads or tasks by monitoring the pressure placed on the pipeline by the executing instruction stack. The methodology is applicable to general architectures, and we demonstrate a sample implementation of the GoblinCore-64 methodology using a classic five-stage, in-order execution pipeline based upon the RISC-V Rocket micro architecture [31]. The end result exhibits up to a 14.6X performance increase on a set of twenty-four well-formed benchmark applications.

The remainder of this paper is organized as follows. Section 2 presents previous work in the area of hardware-driven context switching. Section 3 presents our methodology for performing hardware-driven, low latency context switching. Section 4 presents an evaluation of our methodology, including relevant application benchmarks, and we conclude with a summary of this work in Section 5.
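As a concrete illustration of the pressure-driven mechanism sketched above, the following C fragment models the per-instruction cost accumulation and forced yield. The opcode classes, cost values, saturation point, and all names here are hypothetical placeholders for illustration, not the GC64 encoding.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical opcode classes with illustrative costs; in the proposed
   design, costs are assigned per opcode class by the compiler's
   instruction scheduler early in the decode stage. */
enum opclass { OP_ALU = 1, OP_BRANCH = 2, OP_FP = 4, OP_LOAD = 8, OP_STORE = 8 };

/* Assumed saturation point of the pressure control register. */
#define PRESSURE_MAX 255u

struct hart_ctx {
    uint32_t pressure;  /* accumulated instruction cost for this thread */
    /* ... remaining architectural state held in hardware ... */
};

/* Invoked once per decoded instruction. Returns true when the thread
   has exceeded its cost threshold and must yield the pipeline. */
static bool accumulate_pressure(struct hart_ctx *t, enum opclass op)
{
    t->pressure += (uint32_t)op;        /* add this opcode class's cost */
    if (t->pressure >= PRESSURE_MAX) {  /* saturation point reached     */
        t->pressure = 0;                /* reset the control register   */
        return true;                    /* force a context switch       */
    }
    return false;                       /* thread keeps the pipeline    */
}

In the hardware scheme proposed here, this accounting occurs in the decode stage and the resulting switch completes without memory traffic; the fragment above models only the control decision.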
2 PREVIOUS WORK

Several existing system architectures have explored numerous low-latency context switch mechanisms. One of the seminal architectures to employ a low-latency context switch technique was the Tera MTA-2, which evolved into the Cray XMT/XMT-2 [6][25][5]. The MTA/XMT series of system architectures employed a methodology to store the entire state of a thread, or in MTA/XMT vernacular, a stream, in active processor state. For each execution core, the processor could store the register and state for up to 128 streams. Further, additional stream states could be stored in memory for highly divergent, parallel applications. A cooperation between the hardware infrastructure and the software runtime infrastructure maintained the state of the streams for parallel applications [5].

Additionally, the Convey MX-100 CHOMP instruction set and micro architecture employed overlapping thread state attached to a RISC pipeline as the basis for its high performance context switching [20]. Each RISC pipeline employs 64 thread cache units that represent the active thread state associated with a given core. Threads are context switched using a single cycle cadence and time division multiplexed with one another such that threads would not stall the execution pipeline.

Sun's Niagara processors implemented a similar barrel processor technique to the original MTA/XMT, yet with four threads per core and a RISC pipeline. In addition, the Niagara employed a crossbar memory interface similar to that in the Convey MX-100 that connected all eight cores and two levels of cache to multiple memory controllers. The original Niagara-T1 was limited in its floating point throughput, as a single floating point pipeline was shared amongst all eight cores. The Niagara-T2, however, implemented unique floating point pipelines for each core.

[Figure 1: GC64 Task Processor Architecture]

3 HARDWARE MANAGED THREAD CONCURRENCY

3.1 Thread Context Management

The proposed pressure-driven, hardware managed thread concurrency and context switching features are generally applicable, and are also designed to be coupled to the GoblinCore-64 micro architecture [22]. The GoblinCore-64 (GC64) micro and system architecture is designed to address data intensive computing algorithms and applications by providing unique extensions to simple, in-order RISC-style computing cores that promote concurrency and memory bandwidth utilization. In many cases, these workloads exhibit non-linear, random or irregular memory access patterns. As a result, traditional multi-level cache approaches are insufficient to prevent a high percentage of pipeline stalls when accessing memory in irregular patterns.

The GoblinCore-64 micro architecture provides a set of hierarchical hardware modules that couple 64-bit, in-order RISC-V cores [7][31][30] to a set of latency hiding and concurrency features. The hierarchical hardware modules are gathered into a system on chip (SoC) device and connected to one or more Hybrid Memory Cube (HMC) devices with a weakly ordered memory subsystem [3].

The GC64 micro architecture uniquely encapsulates threads or tasks into divisible units in hardware. These Task Units are designed to represent all of the necessary register and control states required to represent a thread or task in hardware. In this manner, a thread or task's context is saved to this internal storage following a context switch event. No memory operations are otherwise required to service a context switch event. As a result, context switch operations can be performed in a single cycle. Figure 1 depicts all of the RISC-V
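To make the Task Unit storage described above concrete, the following C sketch models per-core, hardware-resident thread contexts in which a context switch amounts to selecting a new active index. The field layout, the task unit count, and the round-robin selection policy are illustrative assumptions, not the GC64 register map.

#include <stdint.h>

/* Assumed number of task units per core; illustrative only. */
#define NUM_TASK_UNITS 8

/* A Task Unit holds the complete architectural state of one thread or
   task, so switching requires no saves or restores to memory. */
struct task_unit {
    uint64_t gpr[32];   /* RISC-V integer register file     */
    uint64_t pc;        /* program counter                  */
    uint32_t pressure;  /* pressure control register        */
    uint8_t  valid;     /* unit currently holds a live task */
};

struct gc64_core {
    struct task_unit tu[NUM_TASK_UNITS];
    unsigned active;    /* index of the task unit issuing instructions */
};

/* Software analogue of the single-cycle switch: rotate to the next
   valid task unit; all contexts remain resident, so no memory
   operations are generated. */
static void context_switch(struct gc64_core *c)
{
    for (unsigned i = 1; i <= NUM_TASK_UNITS; i++) {
        unsigned next = (c->active + i) % NUM_TASK_UNITS;
        if (c->tu[next].valid) {
            c->active = next;
            return;
        }
    }
    /* No other runnable task: the current task keeps the pipeline. */
}

Because every context remains resident in its task unit, the switch reduces to the index rotation above, mirroring the zero-memory-traffic, single-cycle behavior described in the text.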