PRIME: A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory

Ping Chi∗, Shuangchen Li∗, Cong Xu†, Tao Zhang‡, Jishen Zhao§, Yongpan Liu¶, Yu Wang¶ and Yuan Xie∗
∗Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106, USA
†HP Labs, Palo Alto, CA 94304, USA; ‡NVIDIA Corporation, Santa Clara, CA 95950, USA
§Department of Computer Engineering, University of California, Santa Cruz, CA 95064, USA
¶Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
∗Email: {pingchi, shuangchenli, yuanxie}@ece.ucsb.edu

∗Shuangchen Li and Ping Chi have equal contribution. This work is supported in part by NSF 1461698, 1500848, and 1533933, DOE grant DE-SC0013553, and a grant from Qualcomm.

Abstract—Processing-in-memory (PIM) is a promising solution to address the “memory wall” challenge for future computer systems. Prior PIM architectures put additional computation logic in or near memory. The emerging metal-oxide resistive random access memory (ReRAM) has shown its potential to serve as main memory. Moreover, with its crossbar array structure, ReRAM can perform matrix-vector multiplication efficiently, and has been widely studied to accelerate neural network (NN) applications. In this work, we propose a novel PIM architecture, called PRIME, to accelerate NN applications in ReRAM-based main memory. In PRIME, a portion of the ReRAM crossbar arrays can be configured as accelerators for NN applications or as normal memory for a larger memory space. We provide microarchitecture and circuit designs that enable these morphable functions with insignificant area overhead. We also design a software/hardware interface for software developers to implement various NNs on PRIME. Benefiting from both the PIM architecture and the efficiency of using ReRAM for NN computation, PRIME distinguishes itself from prior work on NN acceleration, with significant performance improvement and energy saving. Our experimental results show that, compared with a state-of-the-art neural processing unit design, PRIME improves performance by ∼2360× and reduces energy consumption by ∼895× across the evaluated machine learning benchmarks.

Keywords—processing in memory; neural network; resistive random access memory

I. INTRODUCTION

Conventional computer systems adopt separate processing (CPUs and GPUs) and data storage components (memory, flash, and disks). As the volume of data to process has skyrocketed over the last decade, data movement between the processing units (PUs) and the memory has become one of the most critical performance and energy bottlenecks in computer systems, ranging from cloud servers to end-user devices. For example, a data transfer between the CPU and off-chip memory consumes two orders of magnitude more energy than a floating point operation [1]. Recent progress in processing-in-memory (PIM) techniques introduces promising solutions to this challenge [2]–[5], leveraging 3D memory technologies [6] to integrate computation logic with the memory.

Recent work has demonstrated that some emerging non-volatile memories, such as metal-oxide resistive random access memory (ReRAM) [7], spin-transfer torque magnetic RAM (STT-RAM) [8], and phase change memory (PCM) [9], are capable of performing logic and arithmetic operations beyond data storage. This allows the memory to serve both computation and memory functions, promising a radical renovation of the relationship between computation and memory. Among them, ReRAM can perform matrix-vector multiplication efficiently in a crossbar structure, and has been widely studied as a way to represent synapses in neural computation [10]–[15].

Neural networks (NNs) and deep learning (DL) have the potential to provide optimal solutions in various applications, including image/speech recognition and natural language processing, and have been gaining a lot of attention recently. State-of-the-art NN and DL algorithms, such as the multi-layer perceptron (MLP) and the convolutional neural network (CNN), require a large memory capacity, as the size of NNs has increased dramatically (e.g., 1.32GB of synaptic weights for YouTube video object recognition [16]). High-performance NN acceleration also requires high memory bandwidth, since the PUs are hungry for fetching the synaptic weights [17]. To address this challenge, recent special-purpose chip designs have adopted large on-chip memory to store the synaptic weights.
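As a back-of-the-envelope illustration of why synaptic weights strain on-chip storage, the weight footprint of a single fully connected layer grows with the product of its fan-in and fan-out. The function below is a hypothetical sketch, not from the paper; the layer size and fp32 assumption are ours.

```python
# Hypothetical sketch: bytes of synaptic-weight storage needed by one
# fully connected NN layer, assuming a dense weight matrix.
def fc_weight_bytes(fan_in, fan_out, bytes_per_weight=4):
    """Bytes needed to hold one layer's weight matrix (fp32 by default)."""
    return fan_in * fan_out * bytes_per_weight

# An assumed 4096x4096 layer alone needs 64 MiB of fp32 weights,
# so multi-layer networks quickly exceed typical on-chip SRAM/eDRAM.
print(fc_weight_bytes(4096, 4096) / 2**20)  # -> 64.0 (MiB)
```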
For example, DaDianNao [18] employed a large on-chip eDRAM for both high bandwidth and data locality; TrueNorth utilized an SRAM crossbar memory for synapses in each core [19]. Although these solutions effectively reduce the transfer of synaptic weights between the PUs and the off-chip memory, the movement of input and output data besides the synaptic weights is still a hindrance to performance improvement and energy saving. Instead of integrating more on-chip memory, PIM is a promising solution to tackle this issue by putting the computation logic into the memory chip, so that NN computation can enjoy the large memory capacity and sustain high memory bandwidth via in-memory data communication at the same time.

In this work, we propose a novel PIM architecture for efficient NN computation built upon ReRAM crossbar arrays, called PRIME, processing in ReRAM-based main memory. ReRAM has been proposed as an alternative for building next-generation main memory [20], and is also a good candidate for PIM thanks to its large capacity, fast read speed, and computation capability. In our design, a portion of the memory arrays is enabled to serve as NN accelerators besides normal memory. Our circuit, architecture, and software interface designs allow these ReRAM arrays to dynamically reconfigure between memory and accelerators, and also to represent various NNs. The current PRIME design supports large-scale MLPs and CNNs, which can produce state-of-the-art performance on a variety of NN applications, e.g., top classification accuracy for image recognition tasks. Distinguished from all prior work on NN acceleration, PRIME benefits from both the efficiency of using ReRAM for NN computation and the efficiency of the PIM architecture in reducing data movement overhead, and therefore can achieve significant performance gains and energy savings. As no dedicated processor is required, PRIME incurs very small area overhead. It is also manufacturing friendly with low cost, since it remains a memory design and requires neither complex logic integration nor 3D stacking.

Figure 1. (a) Conceptual view of a ReRAM cell; (b) I-V curve of bipolar switching; (c) schematic view of a crossbar architecture.

Figure 2. (a) An ANN with one input/output layer; (b) using a ReRAM crossbar array for neural computation.
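The crossbar computation depicted in Figure 2 can be summarized as follows: synaptic weights are stored as cell conductances, inputs are applied as wordline voltages, and each bitline current is the sum of the per-cell currents on that column (Kirchhoff's current law), yielding b = aW in one analog read. The snippet below is a minimal idealized sketch of this operation; the function name and the 2×2 values are our own illustration, not from the paper, and it ignores device non-idealities.

```python
import numpy as np

def crossbar_mvm(voltages, conductances):
    """Ideal analog matrix-vector product of one ReRAM crossbar read:
    I_j = sum_i V_i * G_ij (Ohm's law per cell, KCL per bitline)."""
    return voltages @ conductances

# Assumed 2x2 example mirroring Figure 2: two inputs, two outputs.
W = np.array([[0.5, 0.2],
              [0.1, 0.4]])   # conductance matrix G (arbitrary values)
a = np.array([1.0, 2.0])     # input voltage vector
b = crossbar_mvm(a, W)       # b is approximately [0.7, 1.0]
```

Because every column accumulates its currents in parallel, an n×m crossbar performs all n·m multiply-accumulates of the matrix-vector product in a single step, which is the efficiency argument made above.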
The contributions of this paper are summarized as follows:
• We propose a ReRAM-based main memory architecture, which contains a portion of memory arrays (full function subarrays) that can be configured as NN accelerators or as normal memory on demand. It is a novel PIM solution to accelerate NN applications, which enjoys the advantage of in-memory data movement as well as the efficiency of ReRAM-based computation.
• We design a set of circuits and a microarchitecture to enable NN computation in memory, and achieve the goal of low area overhead through careful design, e.g., reusing the peripheral circuits for both memory and computation functions.
• With practical assumptions about the technologies for using ReRAM crossbar arrays in NN computation, we propose an input and synapse composing scheme to overcome the precision challenge.
• We develop a software/hardware interface that allows software developers to configure the full function subarrays to implement various NNs. We optimize the NN mapping at compile time, and exploit the bank-level parallelism of ReRAM memory for further acceleration.

II. BACKGROUND AND RELATED WORK

This section presents the background and related work on ReRAM basics, NN computation using ReRAM, and PIM.

ReRAM stores information by changing cell resistances. The general definition does not specify the resistive switching material. This work focuses on a subset of resistive memories, called metal-oxide ReRAM, which uses metal oxide layers as the switching material.

Figure 1(a) demonstrates the metal-insulator-metal (MIM) structure of a ReRAM cell: a top electrode, a bottom electrode, and a metal-oxide layer sandwiched between them [7]. By applying an external voltage across it, a ReRAM cell can be switched between a high resistance state (HRS) and a low resistance state (LRS), which are used to represent logic “0” and “1”, respectively.

Figure 1(b) shows the I-V characteristics of a typical bipolar ReRAM cell. Switching a cell from HRS (logic “0”) to LRS (logic “1”) is a SET operation, and the reverse process is a RESET operation. To SET the cell, a positive voltage that can generate sufficient write current is required; to RESET the cell, a negative voltage of the proper magnitude is necessary. The reported endurance of ReRAM is up to 10^12 [21], [22], making lifetime less of a concern for ReRAM-based memory than for PCM-based main memory, whose endurance has been assumed to be between 10^6 and 10^8 [23].

An area-efficient array organization for ReRAM is the crossbar structure shown in Figure 1(c) [24]. There are two common approaches to improving the density and reducing the cost of ReRAM: multi-layer crossbar architectures [25]–[27] and multi-level cells (MLC) [28]–[30]. In the MLC structure, a ReRAM cell can store more than one bit of information using multiple levels of resistance. This MLC characteristic can be realized by changing the resistance of a ReRAM cell gradually with finer write control. Recent work has demonstrated 7-bit MLC ReRAM [31].

Due to the crossbar architecture's high density, ReRAM has
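Returning to the MLC discussion above: an n-bit cell offers 2^n discrete resistance levels, so a real-valued synaptic weight must be quantized to the nearest programmable level before it is written. The sketch below illustrates this idea only; the function, range, and values are our own assumptions, not from the paper.

```python
# Illustrative sketch of MLC storage: snap a weight in [w_min, w_max]
# onto one of 2**bits programmable resistance levels.
def quantize_weight(w, bits=7, w_min=0.0, w_max=1.0):
    """Return the level-encoded value an MLC cell would actually store."""
    steps = (1 << bits) - 1               # 2**7 levels -> 127 intervals
    step = (w_max - w_min) / steps
    level = round((w - w_min) / step)     # index of the nearest level
    return w_min + level * step           # quantized weight

# With 7 bits, the quantization error is at most half a step (~0.004
# over a unit range), which is why high-resolution MLC matters for NNs.
w_q = quantize_weight(0.3701)
```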