Bank Stealing for Conflict Mitigation in GPGPU Register File

Naifeng Jing, Shuang Chen, Shunning Jiang, Li Jiang, Chao Li, Xiaoyao Liang*
Shanghai Jiao Tong University, Shanghai, China
[email protected], [email protected]

Abstract— Modern General Purpose Graphics Processing Unit (GPGPU) demands a large Register File (RF), which is typically organized into multiple banks to support the massive parallelism. Although heavy banking benefits RF throughput, its associated area and energy costs with diminishing performance gains greatly limit future RF scaling. In this paper, we propose an improved RF design with a bank stealing technique, which enables high RF throughput with a compact area. By deeply investigating the GPGPU microarchitecture, we identify the deficiency in state-of-the-art RF designs as the bank conflict problem, while the majority of conflicts can be eliminated by leveraging the fact that the highly-banked RF oftentimes experiences under-utilization. This is especially true in GPGPUs, where multiple ready warps are available at the scheduling stage with their operands waiting to be wisely coordinated. Our lightweight bank stealing technique can opportunistically fill the idle banks for better operand service, and the average GPGPU performance can be improved under a smaller energy budget with significant area savings, which makes it promising for sustainable RF scaling.

Keywords— GPGPU, register file, bank conflict, bank stealing, area efficiency

I. Introduction

By exploiting parallelism through massive concurrent threads, modern GPGPUs have emerged as pervasive alternatives for high-performance computing. To hide memory access latency, GPGPUs feature zero-cost context switching, which is supported by a large RF such that the contexts of all concurrent threads can be held in each Streaming Multiprocessor (SM). For instance, the up-to-date Nvidia Kepler architecture is equipped with 65,536 32-bit (256KB) registers to support 2,048 threads per SM. On top of that, the RF capacity keeps increasing with each product generation in pursuit of even higher thread-level parallelism (TLP).

The massive TLP challenges the RF design because the RF is quite large and keeps growing. Meanwhile, simultaneous multiple reads and writes are required: in the Nvidia PTX standard [5], each instruction can read up to 3 registers and write 1 register. Obviously, constructing such a heavily-ported (6-read and 2-write ports for dual-issue), large RF (hundreds of KB) is neither feasible nor economical. To solve this problem, banked RF structures have been proposed [6]. While various RF organizations are feasible, one of the most commonly used combines multiple single-ported banks into a monolithic RF for reduced power and design complexity.

Such a multi-banked structure can sustain the multi-ported RF throughput by serving concurrent operand accesses as long as there are no bank conflicts. If a conflict occurs, the requests are serialized and the RF throughput is reduced, leading to performance loss. As a result, in the face of RF capacity scaling in future GPGPUs, adding more banks is a natural way to accommodate the increased registers without worsening the bank conflicts. Otherwise, the performance gained from higher TLP could be compromised because more threads from the schedulers would compete for an insufficient number of RF banks, aggravating the conflicts and reducing the throughput.

However, banks are not free. Adding banks requires extra column circuitry such as sense amplifiers, pre-chargers, bitline drivers and a bundle of wires [2]. Meanwhile, the crossbar between the banks and the operand collectors grows significantly with the increased number of banks. These costs affect RF area, timing and power [3]. Because modern GPGPUs tend to be area-limited as they have already become power-limited [4], it is important to incorporate a suitable number of banks while seeking more economical ways to mitigate the conflicts for the ever-increasing RF capacity.

In this paper, we explicitly address this issue with an efficient RF design that sustains capacity scaling in future GPGPUs. We observe many "bubbles" in bank utilization, and apply fine-grained architectural control by stealing the idle banks to serve concurrent operand accesses, which effectively recovers the throughput loss due to bank conflicts. By explicitly leveraging the multi-warp and multi-bank features of GPGPUs, we balance the utilization of the existing RF resources instead of simply adding more banks. Experimental results demonstrate that our proposed RF with bank stealing can deliver better performance, less energy and significant area savings.

The remainder of this paper is organized as follows. Section II reviews the background and previous work on GPGPU RFs. Section III introduces our motivation. Section IV proposes our techniques and architectural support. Sections V and VI present the evaluation methodology and experimental results. Finally, Section VII concludes this paper.

978-1-4673-8009-6/15/$31.00 ©2015 IEEE — Symposium on Low Power Electronics and Design

II. Background and Previous Work

A. Baseline GPGPU and RF Architecture

A modern GPGPU consists of a scalable array of multithreaded SMs to enable the massive TLP. Threads are issued in groups of 32, called warps. At any given clock cycle, a ready warp is selected and issued by one scheduler. In the Nvidia Fermi GPU, each SM has 2 schedulers and 32 processing lanes with many execution units, as in Fig. 1. Registers associated with warps are distributed across the banks [7], and each bank has its own decoder, sense amplifiers, etc. to operate independently.

Fig. 1: Block diagram of a typical GPGPU pipeline, highlighting the banked RF with operand collectors and a crossbar unit.

A typical banked RF is sketched in Fig. 1. For a scheduled instruction to be dispatched, it first reserves a free operand collector. If this fails, it has to wait until a collector becomes available. The arbitrator distributes RF requests from the collectors to the banks but only allows a single access to the same bank at a time. Once bank conflicts occur, the requests have to be serialized. Collectors can receive multiple operands via the crossbar concurrently. After all the operands are collected, the instruction is dispatched to the execution unit and the hosting collector is released. Note that bank conflicts directly affect the operand fetching latency, which cannot be hidden by the multiple warps after the issue stage and causes issue stalls at the beginning of the execution pipeline.

B. Previous Work on GPGPU RF Design

There has been a large amount of work on energy-efficient GPGPU RF designs, e.g. hybrid SRAM-DRAM [3], STT-MRAM [8] and eDRAM-based RFs [9][10]. As these new memory-based RFs compromise performance, these works focus on smart strategies to recover the performance loss. Architectural techniques have also been intensively studied to improve RF performance or power. One study related to our work in the CPU domain [11] proposes an area-efficient RF by reducing ports but leveraging buffer queues with pipeline …

We conduct a design space exploration to experimentally study how banking impacts the performance, area, delay and power on a pilot RF design as described in Section V.

Performance. We vary the number of banks with a fixed RF capacity of 128KB per SM, which is typical in Fermi-like GPGPUs [6]. The performance results, averaged across all the simulated benchmarks, are shown in Fig. 2(a). Adding banks benefits performance significantly for a small number of banks. With an increasing number of banks, the benefit diminishes: the 16-bank RF delivers almost the same performance as the 32-bank case due to the much reduced conflicts. The exploration reveals that 16 banks can be optimal for the 128KB RF.

RF area. Fig. 2(b) plots the breakdown of the banking area in a 128KB RF into array cells, peripheral circuitry and the crossbar. It clearly shows that more banks lead to significant area overhead. The area of the array cells stays constant across the different schemes due to the fixed RF capacity. In contrast, the area of the peripheral circuitry, such as decoders, pre-chargers and equalizers, bitline multiplexers, sense amplifiers and output drivers, expands notably with an increasing number of banks. This is because a GPGPU register of 128B is much wider than a CPU register, which significantly increases the number of bitlines and widens the data buses. Each bank duplicates the peripheral circuitry for independent array control, resulting in an even larger area expansion compared to conventional CPUs [13]. Meanwhile, the crossbar dimension between the RF banks and the collectors grows significantly with the number of banks. For example, nearly 50% additional area is observed going from a 16-bank to a 32-bank RF, although these two designs deliver quite similar performance as shown in Fig. 2(a).

RF timing. Because we fix the RF capacity, more banks naturally result in a shorter access time in the memory array, as shown in Fig. 2(c). However, the array access time is just a portion of the total RF delay, and the advantage is compromised by the wire transfer time through a larger crossbar. With an increasing number of banks, the crossbar delay tends to dominate. Nevertheless, even the worst-case scheme studied in this paper can meet the sub-1GHz GPGPU frequency, so we do not consider the timing differences further.
