Dataflow-Architecture Co-Design for 2.5D DNN Accelerators using Wireless Network-on-Package

Robert Guirado∗ (Universitat Politècnica de Catalunya, Barcelona, Spain)
Hyoukjun Kwon∗ (Georgia Institute of Technology, Atlanta, GA, USA)
Sergi Abadal (Universitat Politècnica de Catalunya, Barcelona, Spain)
Eduard Alarcón (Universitat Politècnica de Catalunya, Barcelona, Spain)
Tushar Krishna (Georgia Institute of Technology, Atlanta, GA, USA)

∗Both authors contributed equally to this research.

ASPDAC ’21, January 18–21, 2021, Tokyo, Japan. arXiv:2011.14755v1 [cs.AR] 30 Nov 2020

Abstract

Deep neural network (DNN) models continue to grow in size and complexity, demanding higher computational power to enable real-time inference. To deliver such computational demands efficiently, hardware accelerators are being developed and deployed across scales. This naturally requires an efficient scale-out mechanism for increasing compute density as required by the application. 2.5D integration over an interposer has emerged as a promising solution but, as we show in this work, the limited interposer bandwidth and the multiple hops in the Network-on-Package (NoP) can diminish the benefits of the approach. To cope with this challenge, we propose WIENNA, a wireless NoP-based 2.5D DNN accelerator. In WIENNA, the wireless NoP connects an array of DNN accelerator chiplets to the global buffer chiplet, providing high-bandwidth multicasting capabilities. We also identify, for each layer, the dataflow style that most efficiently exploits the wireless NoP's high-bandwidth multicasting capability. With modest area and power overheads, WIENNA achieves 2.2X–5.1X higher throughput and 38.2% lower energy than an interposer-based NoP design.

1 Introduction

Deep Neural Networks (DNN) are currently able to solve a myriad of tasks with superhuman accuracy [13, 20]. To achieve these outstanding results, DNNs have become larger and deeper, reaching up to billions of parameters. Due to the enormous amount of calculations, the hardware running DNN inference has to be extremely energy efficient, a goal that CPUs and even GPUs do not live up to easily. This has led to the fast development of specialized hardware.

Research in DNN accelerators [6, 17] is a bustling topic. DNNs exhibit plenty of data reuse and parallelism opportunities that can be exploited via custom memory hierarchies and a sea of processing elements (PEs), respectively. However, as DNN models continue to scale, the compute capabilities of DNN accelerators need to scale as well.

At a fundamental level, DNN accelerators can either be scaled up (i.e., adding more PEs) or scaled out (i.e., connecting multiple accelerator chips together). There is a natural limit to scale-up due to cost and yield issues. Scale-out is typically done via board-level integration, which comes with the overheads of high communication latency and low bandwidth. However, the recent appeal of 2.5D integration of multiple chiplets interconnected via an interposer on the same package [15] (e.g., AMD's Ryzen CPU) offers opportunities for enabling efficient scale-out. This has proven effective in the DNN accelerator domain via recent works [22, 30].

Unfortunately, as we identify in this work, DNN accelerator chiplets have an insatiable appetite for data to keep the PEs utilized, which is a challenge for interposers due to their limited bandwidth. This limitation originates from the large microbumps (∼40µm [15]) at the I/O ports of each chiplet, which reduce the bandwidth by orders of magnitude compared to the nm-pitch wires within the chiplet. The limited I/O bandwidth also causes chiplets to be typically connected to their neighbours only, making the average number of intermediate chiplets (hops) between source and destination increase with the chiplet count. This introduces significant delay to data communication.

To address these challenges, we explore for the first time the opportunities provided by integrated wireless technology [2, 5, 10, 27] for data distribution in 2.5D DNN accelerators. We show that wireless links can (i) provide higher bandwidth than electrical interposers and (ii) naturally support broadcast. We then identify dataflow styles that use these features to exploit the reuse opportunities across accelerator chiplets.

Our main contribution is WIENNA (WIreless-Enabled communications in Neural Network Accelerators), a 2.5D on-package scale-out accelerator architecture that leverages single-cycle wireless broadcasts to distribute weights and inputs, depending on the partitioning (i.e., dataflow) strategy. We implement three partitioning strategies (batch, filter, and activation) for the scale-out architecture, leveraging the parallelism in DNNs across these three dimensions. We evaluate an instance of WIENNA with 256 NVDLA [1]-like accelerator chiplets, and observe a 2.5–4.4× throughput improvement and an average 38.2% energy reduction over a baseline 2.5D accelerator that uses an electrical-only interposer.
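The three partitioning strategies are only named here, so the following minimal Python sketch (ours, not code from the paper; the ConvLayer shapes and the partition helper are hypothetical) illustrates the co-design idea: each strategy splits one tensor dimension across chiplets and leaves one operand shared, and it is that shared operand the wireless NoP can deliver in a single broadcast.

```python
# Illustrative sketch (not from the paper): how the three partitioning
# strategies could map a conv layer onto chiplets, and which tensor each
# one lets the global buffer broadcast. Names and shapes are assumptions.

from dataclasses import dataclass

@dataclass
class ConvLayer:
    n: int  # batch size
    k: int  # number of output filters
    h: int  # output activation height
    w: int  # output activation width

def partition(layer: ConvLayer, num_chiplets: int, strategy: str):
    """Return (broadcast_tensor, per_chiplet_share) for a strategy.

    The broadcast tensor is sent once over the wireless NoP and received
    by all chiplets; the other operand is split so each chiplet works on
    an independent slice.
    """
    if strategy == "batch":
        # Each chiplet processes a slice of the batch; all chiplets need
        # every weight, so weights are the broadcast operand.
        return "weights", f"{layer.n // num_chiplets} images"
    if strategy == "filter":
        # Each chiplet computes a slice of the output channels; all
        # chiplets need the same inputs, so inputs are broadcast.
        return "inputs", f"{layer.k // num_chiplets} filters"
    if strategy == "activation":
        # Each chiplet computes a spatial tile of the output; all
        # chiplets again need every weight, so weights are broadcast.
        return "weights", f"{layer.h * layer.w // num_chiplets} output pixels"
    raise ValueError(strategy)

# Example: a layer with a large batch favors batch partitioning,
# broadcasting the shared weights once to all 256 chiplets.
print(partition(ConvLayer(n=512, k=512, h=56, w=56), 256, "batch"))
```

Which strategy wins on a given layer depends on which dimension is large enough to keep 256 chiplets busy, which is why the dataflow style is selected per layer.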
2 Background and Related Work

DNN Accelerators. A DNN accelerator is specialized hardware for running DNN algorithms, which provides higher throughput and energy efficiency than GPUs and CPUs via parallelism and dedicated circuitry. The abstract architecture of most DNN accelerators [6, 17, 18] consists of an off-chip memory, a global shared memory, and an array of PEs connected via a Network-on-Chip (NoC).

When mapping the targeted DNN onto the PEs, one can apply different strategies, or dataflows. Dataflows consist of three key components: loop ordering, tiling, and parallelization, which define how the dimensions are distributed across the PEs to run in parallel. The dataflow space is huge and its exploration is an active research topic [11, 12, 16, 28], as it directly determines the amount of data reuse and movement.

DNN accelerators involve three phases of communication [17] handled by the NoC. (1) Distribution is a few-to-many flow that involves sending inputs and weights to the PEs via unicasts, multicasts, or broadcasts, depending on the partitioning strategy of the dataflow. (2) Local forwarding involves data reuse within the PE array via inter-PE input/weight/partial-sum forwarding. (3) Collection involves writing the final outputs back to the global memory.
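To make the three dataflow components concrete, here is a minimal runnable sketch (ours; the 1-D convolution, tile size, and PE count are arbitrary illustrative choices) in which parallelization, tiling, and loop ordering each appear as one level of the loop nest:

```python
# Illustrative sketch (our own, not code from the paper): the three
# dataflow components on a 1-D convolution.

import numpy as np

def conv1d_dataflow(inputs, weights, num_pes=4, tile=16):
    """1-D convolution with an explicit tiled, parallelized loop nest."""
    out_len = len(inputs) - len(weights) + 1
    outputs = np.zeros(out_len)
    # Parallelization: the outer loop iterates over PEs, each owning a
    # slice of the output (an "output-partitioned" mapping).
    for pe in range(num_pes):
        lo = pe * out_len // num_pes
        hi = (pe + 1) * out_len // num_pes
        # Tiling: each PE steps through tiles sized to its local buffer.
        for t in range(lo, hi, tile):
            # Loop ordering: placing weights in the outer loop reuses
            # each weight across the whole tile (weight-stationary style).
            for i, w in enumerate(weights):
                for o in range(t, min(t + tile, hi)):
                    outputs[o] += w * inputs[o + i]
    return outputs

x = np.arange(64, dtype=float)
k = np.array([1.0, -1.0])
assert np.allclose(conv1d_dataflow(x, k), np.convolve(x, k[::-1], "valid"))
```

Swapping the loop order, the tile size, or the partitioned dimension yields a different dataflow with a different reuse pattern, which is why the exploration space is so large.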
2.5D Integration. 2.5D systems refer to the integration of multiple discrete chiplets within a package, either over a silicon interposer [15] or via other technologies. 2.5D integration is promising due to its higher yield (smaller chiplets have better yields than large monolithic dies), reusability (chipletization can enable plug-and-play designs, enabling IP reuse), and cost (it is more cost-effective to integrate multiple chiplets than to design a complex monolithic die). It has been shown to be effective for DNN accelerators [22, 30].

A silicon interposer is effectively a large chip upon which other, smaller chiplets can be stacked. Thus, wires on the interposer can operate at the same latency as on-chip wires. However, the micro-bumps used to connect each chiplet to the interposer are ∼40µm in state-of-the-art TSMC processes [15]. This is much finer than board-level C4 bumps (∼180µm), but much coarser than the wire pitch of on-chip NoCs. This limits the number of microbumps that fit over the chiplet area and, thus, the effective inter-chiplet bandwidth.

In a scale-out DNN accelerator, limited interposer bandwidth slows down reads and writes from the global SRAM. While collection (i.e., writes) can be hidden behind compute delay, distribution (i.e., reads) is in the critical path [22]. Provisioning enough wires to enable fast distribution can be prohibitive and, even then, multiple hops are unavoidable.

[Figure 1: Transceiver area (mm²) and normalized power (mW/cm) as functions of the datarate (Gbps) for [24, 26, 27] and references therein, showing datapoints, the Pareto frontier, and a conservative fit. Power is normalized to the transmission range and to a 10⁻⁹ error rate. Energy per bit can be obtained by dividing the power by the datarate.]

Wireless Network-on-Package. Fully integrated on-chip antennas [5] and transceivers (TRXs) [26, 27] have appeared recently, enabling short-range wireless communication at up to 100 Gb/s with low resource overheads. Fig. 1 shows how the area and power scale with the datarate, based on the analysis of 70+ short-range TRXs with different modulations and technologies [24, 26, 27].

Wireless NoPs are among the applications of this technology. In a wireless NoP, processor or memory chiplets are provided with antennas and TRXs that are used to communicate within the chiplet, or with other chiplets, using the system package as the propagation medium. This in-package channel is static and controlled and, thus, it can be optimized. In [25], it is shown that a system-wide attenuation below 30 dB is achievable. These figures are compatible with the 65-nm CMOS TRX specs from [27]: 48 Gb/s at 1.95 pJ/bit over a 25 mm distance with error rates below 10⁻¹², and 0.8 mm² of area.

By not needing to lay down wires between TRXs, a wireless NoP offers scalable broadcast support and low latency across the system. It is scalable because additional chiplets only need to incorporate a TRX, and no extra off-chip wiring, to participate in the communication. The bandwidth is high because it is not limited by I/O pin constraints, and the latency is low as transfers bypass intermediate hops. A wireless NoP also allows the topology to be changed dynamically via reconfigurable medium access control or network protocols [21], as receivers can decide at run-time whether to process incoming transfers.

3 Motivation

To determine the design requirements for a 2.5D fabric for accelerator scale-out, we assess the impact of the data distribution
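As a rough, back-of-envelope illustration of the gap this assessment builds on (ours; the link width, clock, fan-out, and tile size below are assumptions, and only the TRX figures come from [27] above), compare delivering one shared weight tile to all 256 chiplets by repeated unicasts over interposer links versus one wireless broadcast:

```python
# Back-of-envelope sketch (our own; assumed values are marked, and only
# the TRX figures come from the text above). It compares delivering one
# shared 64 KB weight tile to all chiplets via (a) repeated unicasts
# from the global buffer chiplet over interposer links and (b) a single
# wireless broadcast.

num_chiplets = 256
tile_bits = 64 * 1024 * 8          # 64 KB tile (assumed size)

# (a) Interposer: assume a 32-bit chiplet-to-chiplet link at 1 GHz and
# 4 outgoing links at the global buffer chiplet. Without native
# multicast support, the shared tile is injected once per destination.
link_gbps = 32 * 1.0
inject_gbps = 4 * link_gbps
wired_us = num_chiplets * tile_bits / (inject_gbps * 1e3)

# (b) Wireless NoP: the 65-nm TRX from [27] sustains 48 Gb/s at
# 1.95 pJ/bit, and a single transmission reaches every chiplet.
trx_gbps, trx_pj_bit = 48.0, 1.95
wireless_us = tile_bits / (trx_gbps * 1e3)
wireless_uj = tile_bits * trx_pj_bit * 1e-6

print(f"unicasts over interposer:  {wired_us:7.1f} us")   # ~1049 us
print(f"single wireless broadcast: {wireless_us:7.1f} us ({wireless_uj:.2f} uJ)")
```

Even ignoring the per-hop latency that the mesh adds on top of this serialization, the single broadcast is roughly two orders of magnitude faster for shared data under these assumptions.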
