ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs

Fei Gao, Georgios Tziantzioulis, David Wentzlaff
[email protected] [email protected] [email protected]
Department of Electrical Engineering, Princeton University

ABSTRACT

In-memory computing has long been promised as a solution to the "Memory Wall" problem. Recent work has proposed using charge-sharing on the bit-lines of a memory in order to compute in-place and with massive parallelism, all without having to move data across the memory bus. Unfortunately, prior work has required modification to RAM designs (e.g. adding multiple row decoders) in order to open multiple rows simultaneously. So far, the competitive and low-margin nature of the DRAM industry has made commercial DRAM manufacturers resist adding any additional logic into DRAM. This paper addresses the need for in-memory computation with little to no change to DRAM designs. It is the first work to demonstrate in-memory computation with off-the-shelf, unmodified, commercial DRAM. This is accomplished by violating the nominal timing specification and activating multiple rows in rapid succession, which happens to leave multiple rows open simultaneously, thereby enabling bit-line charge sharing. We use a constraint-violating command sequence to implement and demonstrate row copy, logical OR, and logical AND in unmodified, commodity DRAM. Subsequently, we employ these primitives to develop an architecture for arbitrary, massively-parallel computation. Utilizing a customized DRAM controller in an FPGA and commodity DRAM modules, we characterize this opportunity in hardware for all major DRAM vendors. This work stands as a proof of concept that in-memory computation is possible with unmodified DRAM modules and that there exists a financially feasible way for DRAM manufacturers to support in-memory compute.

CCS CONCEPTS

• Computer systems organization → Parallel architectures; • Hardware → Dynamic memory.

KEYWORDS

DRAM, in-memory computing, bit-serial, main memory

ACM Reference Format:
Fei Gao, Georgios Tziantzioulis, and David Wentzlaff. 2019. ComputeDRAM: In-Memory Compute Using Off-the-Shelf DRAMs. In The 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-52), October 12–16, 2019, Columbus, OH, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3352460.3358260

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
MICRO-52, October 12–16, 2019, Columbus, OH, USA
© 2019 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-6938-1/19/10...$15.00
https://doi.org/10.1145/3352460.3358260

1 INTRODUCTION

In modern computing systems, moving data between compute resources and main memory consumes a large portion of the overall system energy and significantly contributes to program execution time. As increasing numbers of processor cores have been integrated onto a single chip, the amount of memory bandwidth has not kept up, thereby leading to a "Memory Wall" [48, 62]. Making matters worse, the communication latency between compute resources and off-chip DRAM has not improved as fast as the amount of computing resources has increased.

To address these challenges, near-memory compute [3, 11, 29, 35, 55], Processors-in-Memory [19, 23, 27, 39, 52, 59], and in-memory compute [28, 33, 46] have all been proposed. This paper focuses on the most aggressive solution: performing computations with the memory. Unfortunately, performing computations with memory resources has relied either on emerging memory technologies [14, 16, 60] or on additional circuits added to RAM arrays. While some solutions have been demonstrated in silicon [1, 2, 45, 57], none of these solutions have gained widespread industry adoption, largely because they require additional circuits to be added to already cost-optimized and low-margin RAM implementations.

In this paper, we demonstrate a novel method that performs computation with off-the-shelf, unmodified, commercial DRAM. Utilizing a customized memory controller, we are able to change the timing of standard DRAM memory transactions, operating outside of specification, to perform massively parallel logical AND, logical OR, and row copy operations. Using these base operations, and by storing and computing each value together with its logical negation, we can compute arbitrary functions in a massively parallel, bit-serial manner. Our novel operations function at the circuit level by forcing the commodity DRAM chip to open multiple rows simultaneously by activating them in rapid succession. Previous works [56, 57] have shown that with multiple rows opened, row copy and logical operations can be completed using bit-line charge sharing. However, to achieve this, all previous techniques relied on hardware modifications. To our knowledge, this is the first work to perform row copy, logical AND, and logical OR using off-the-shelf, unmodified, commercial DRAM.

With a slight modification to the DRAM controller, in-memory compute is able to save the considerable energy needed to move data across memory buses and cache hierarchies. We regard this work as an important proof of concept and an indication that in-memory computation using DRAM is practical, low-cost, and should be included in all future computer systems. Taking a step further, we characterize the robustness of each computational operation and identify which operations can be performed by which vendor's DRAM. While our results show that not all vendors' off-the-shelf DRAMs support all operations, the fact that we are able to find any unmodified DRAMs that do support a complete set of operations demonstrates that DRAM vendors can likely support in-memory compute in DRAM at little to no monetary cost. In-memory compute also shows potential as an alternative to using traditional silicon CMOS logic gates for compute. With the slowing and potential ending of Moore's Law, the computing community has to look beyond relying on non-scaling silicon CMOS logic transistors for alternatives such as relying more heavily on in-memory computation.

This work has the potential for large impact on the computing industry with minimal hardware design changes. Currently, DRAM is used solely to store data. Our discovery that off-the-shelf DRAM can be used to perform computations, as is or with nominal modifications by DRAM vendors, means that with a small update to the DRAM controller, all new computers can perform massively parallel computations without needing to have data transit the memory bus and memory hierarchy. This massively parallel bit-serial computation has the potential to save significant energy and can complete computations in parallel with the main processor. We envision this type of computing being used for data-parallel applications or sub-routines within a larger application. Prior work has even proposed applying such architectures to image [53] and signal [10] processing, computer vision [20], and the emerging field of neural networks [21].

The main contributions of this work include:

• This is the first work to demonstrate row copy in off-the-shelf, unmodified, commercial DRAM.
• This is the first work to demonstrate bit-wise logical AND and OR in off-the-shelf, unmodified, commercial DRAM.
• We carefully characterize the capabilities and robustness of the in-memory compute operations across DDR3 DRAM modules from all major DRAM vendors.

2 BACKGROUND

[Figure 1: DRAM hierarchy. A diagram of the memory system: channels, ranks, chips, banks, sub-arrays, rows, columns, and the local and global row buffers, decoders, and sense amplifiers; the figure itself is not reproduced here.]

A memory system comprises one or more memory modules per channel, and up to two ranks per module. Each rank consists of eight physical chips, which are accessed in parallel and share the same command bus. When memory is accessed, all chips see the same address, and the data read out from the different chips in one rank are concatenated. Each chip contains eight independent banks, and each bank is composed of rows and columns of DRAM memory cells. A row refers to a group of storage cells that are activated simultaneously by a row activation command, and a column refers to the smallest unit of data (one byte per chip) that can be addressed. Since it is impractical to use a single row decoder to address tens of thousands of rows in a bank, or to use long bit-lines to connect the cells in a bank vertically, the bank is further divided into sub-arrays, each of which typically contains 512 rows. Within a sub-array, each bit-line connects all the cells in one bit position of a column to the sense amplifier in the local row buffer.

As the processor (or an I/O device) operates, it generates a stream of memory accesses (reads and writes), which are forwarded to the Memory Controller (MC). The MC's role is to receive memory requests, generate a sequence of DRAM commands for the memory system, and issue these commands, properly timed, across the memory channel. Deviation from the DRAM timing specification directly affects this operation: the side effects generated by non-nominal operation of a DRAM module can affect the stored data. It is these side effects that we exploit to perform computation.
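The controller's job of mapping a flat physical address onto this bank/row/column hierarchy can be illustrated with a toy decomposition. The field widths below are hypothetical (8 banks, 2^15 rows, 1024 byte-columns); real address mappings are vendor- and platform-specific, and the function name is ours:

```python
# Toy mapping from a flat physical address onto the bank/row/column
# hierarchy described above. Field widths are hypothetical: 8 banks
# (3 bits), 2**15 rows (15 bits), 1024 byte-columns (10 bits).
BANK_BITS, ROW_BITS, COL_BITS = 3, 15, 10

def decompose(addr):
    """Split a flat address into (bank, row, column) fields."""
    col = addr & ((1 << COL_BITS) - 1)
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    bank = (addr >> (COL_BITS + ROW_BITS)) & ((1 << BANK_BITS) - 1)
    return bank, row, col
```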
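The timing-violation mechanism described above can be sketched as a memory-controller command trace for row copy: ACTIVATE, PRECHARGE, ACTIVATE, with both inter-command delays shrunk far below the DDR3 minimums (tRAS and tRP). Issued this fast, the precharge never completes, so the first row is still open when the second is activated, and charge sharing overwrites the destination with the source. The command mnemonics are standard DDR3; the numeric constants and the function name are illustrative assumptions, not the values the authors characterize:

```python
# Sketch of a constraint-violating command sequence for in-DRAM row copy.
# All timing numbers are illustrative, in memory-controller cycles.
T_RAS_MIN = 28  # nominal minimum ACTIVATE -> PRECHARGE delay (illustrative)
T_RP_MIN = 10   # nominal minimum PRECHARGE -> ACTIVATE delay (illustrative)

def row_copy_trace(src_row, dst_row, t1=1, t2=1):
    """Return a (cycle, command, row) trace for a row-copy attempt.

    t1 and t2 are the deliberately under-spec delays after the first
    ACTIVATE and after the PRECHARGE, respectively.
    """
    if not (t1 < T_RAS_MIN and t2 < T_RP_MIN):
        raise ValueError("delays must violate the nominal timing spec")
    return [
        (0, "ACTIVATE", src_row),        # sense amps latch the source row
        (t1, "PRECHARGE", None),         # interrupted before it completes
        (t1 + t2, "ACTIVATE", dst_row),  # dst opens onto src's bit-line state
    ]
```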
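The logical operations can be understood with a simple charge-sharing model, following the prior work cited above: when several cells share a bit-line, the sense amplifier resolves toward the majority charge level, and pinning a third row to a constant turns three-row majority into AND (constant 0) or OR (constant 1). A minimal sketch, with function names that are ours rather than the paper's:

```python
# Toy model of multi-row bit-line charge sharing resolved by the sense amp.

def sense(bits):
    """Majority-resolution model of the sense amplifier."""
    return 1 if 2 * sum(bits) > len(bits) else 0

def bitline_and(a, b):
    # Third row holds constant 0: maj(a, b, 0) == a AND b, per bit.
    return [sense([x, y, 0]) for x, y in zip(a, b)]

def bitline_or(a, b):
    # Third row holds constant 1: maj(a, b, 1) == a OR b, per bit.
    return [sense([x, y, 1]) for x, y in zip(a, b)]
```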
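Storing each value alongside its negation, as described in the introduction, is what makes AND and OR alone sufficient for arbitrary functions: De Morgan's laws push every NOT to the inputs, and each result's negation is computed the same way to preserve the invariant. A sketch for XOR, with Python's bitwise operators standing in for the in-DRAM row operations (function names are ours):

```python
# Dual-rail bit-serial logic: every value is stored with its complement,
# so AND/OR alone compute any function (all NOTs live at the inputs).

def AND(a, b):
    return [x & y for x, y in zip(a, b)]

def OR(a, b):
    return [x | y for x, y in zip(a, b)]

def xor_dual_rail(a, na, b, nb):
    """Compute (a XOR b, NOT(a XOR b)) from vectors a, b and their
    stored complements na, nb, using only AND and OR."""
    r = OR(AND(a, nb), AND(na, b))    # a^b  = (a & ~b) | (~a & b)
    nr = OR(AND(a, b), AND(na, nb))   # ~(a^b) = (a & b) | (~a & ~b)
    return r, nr
```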
