Near-Memory Computing: Past, Present, and Future

Gagandeep Singh∗, Lorenzo Chelini∗, Stefano Corda∗, Ahsan Javed Awan†, Sander Stuijk∗, Roel Jordans∗, Henk Corporaal∗, Albert-Jan Boonstra‡
∗Department of Electrical Engineering, Eindhoven University of Technology, Netherlands — {g.singh, l.chelini, s.corda, s.stuijk, r.jordans, [email protected]
†Department of Cloud Systems and Platforms, Ericsson Research, Sweden — [email protected]
‡R&D Department, Netherlands Institute for Radio Astronomy, ASTRON, Netherlands — [email protected]

arXiv:1908.02640v1 [cs.AR] 7 Aug 2019

Abstract—The conventional approach of moving data to the CPU for computation has become a significant performance bottleneck for emerging scale-out data-intensive applications due to their limited data reuse. At the same time, the advancement in 3D integration technologies has made the decade-old concept of coupling compute units close to the memory — called near-memory computing (NMC) — more viable. Processing right at the "home" of data can significantly diminish the data movement problem of data-intensive applications. In this paper, we survey the prior art on NMC across various dimensions (architecture, applications, tools, etc.) and identify the key challenges and open issues with future research directions. We also provide a glimpse of our approach to near-memory computing, which includes i) NMC-specific, microarchitecture-independent application characterization, ii) a compiler framework to offload NMC kernels onto our target NMC platform, and iii) an analytical model to evaluate the potential of NMC.

Index Terms—near-memory computing, data-centric computing, modeling, computer architecture, application characterization, survey

I. INTRODUCTION

Over the years, memory technology has not been able to keep up with advancements in processor technology in terms of latency and energy consumption, a gap referred to as the memory wall [1]. Earlier, system architects tried to bridge this gap by introducing memory hierarchies that mitigated some of the disadvantages of off-chip DRAMs. However, the limited number of pins on the memory package cannot meet today's bandwidth demands of multicore processors. Furthermore, with the demise of Dennard scaling [2], the slowing of Moore's law, and dark silicon, computer performance has reached a plateau [3].

At the same time, we are witnessing an enormous amount of data being generated across multiple areas such as radio astronomy, material science, chemistry, and health sciences [4]. In radio astronomy, for example, the first phase of the Square Kilometre Array (SKA) aims at processing over 100 terabytes of raw data samples per second, yielding on the order of 300 petabytes of SKA data products annually [5]. The SKA is currently in the design phase, with construction anticipated in South Africa and Australia in the first half of the coming decade. These radio-astronomy applications usually exhibit massive data parallelism and low operational intensity with limited locality. On traditional systems, such applications cause frequent data movement between the memory subsystem and the processor, which has a severe impact on performance and energy efficiency. Likewise, cluster computing frameworks like Apache Spark, which enable distributed in-memory processing of batch and streaming data, exhibit the same behaviour [6]–[8]. Therefore, much current research focuses on innovative manufacturing technologies and architectures to overcome these problems.

Today's memory hierarchy usually consists of multiple levels of cache, the main memory, and storage. The traditional approach is to move data up from storage into the caches and then process it. In contrast, near-memory computing (NMC) aims at processing close to where the data resides. This data-centric approach couples compute units close to the data and seeks to minimize expensive data movement. Notably, three-dimensional stacking is touted as the true enabler of processing close to the memory: it allows stacking logic and memory together using through-silicon vias (TSVs), which reduces memory access latency and power consumption while providing much higher bandwidth [9]. Micron's Hybrid Memory Cube (HMC) [10], High Bandwidth Memory (HBM) [11] from AMD and Hynix, and Samsung's Wide I/O [12] are the competing products in the 3D memory arena.

Figure 1 depicts the evolution of computing systems based on the information referenced by a program during execution, which is referred to as its working set [13]. Prior systems were based on a CPU-centric approach in which data is moved to the core for processing (Figure 1 (a)-(c)), whereas with near-memory processing (Figure 1 (d)) the processing cores are brought to the place where the data resides. The computation-in-memory paradigm (Figure 1 (e)) aims to eliminate data movement entirely by using memories with compute capability (e.g., memristors, phase-change memory (PCM)).

Fig. 1: Classification of computing systems based on working set location: (a) early computers; (b) single core with embedded cache; (c) multi/many core with L1, L2, and shared L3 cache; (d) near-memory computing; (e) computation-in-memory.

This paper aims to analyze and organize the extensive body of literature on the novel area of near-memory computing. Figure 2 shows a high-level view of our classification, which is based on the level in the memory hierarchy and is further split by the type of compute implementation (programmable, fixed-function, or reconfigurable). Conceptually, near-memory computing can be applied at any level or type of memory to improve overall system performance. In our taxonomy, we do not include magnetic-disk-based systems, as they can no longer offer timely responses due to the high access latency and high failure rate of disks [14]. Nevertheless, there have been research efforts towards providing processing capabilities in the disk; however, these were not widely adopted by industry because the marginal performance improvement could not justify the associated cost [15], [16]. Instead, we include emerging non-volatile memories, termed storage-class memories (SCM) [17], which are trying to fill the latency gap between DRAMs and disks.

Fig. 2: Processing options in the memory hierarchy, highlighting the conventional compute-centric and the modern data-centric approach.

Building upon similar efforts [18], [19], we make the following contributions:
• We analyze and organize the body of literature on near-memory computing under various dimensions.
• We provide guidelines for design space exploration.
• We present our near-memory system, outlining its application characterization and compiler framework. We also include an analytic model to illustrate the potential of near-memory computing for memory-intensive applications, with a comparison to the current CPU-centric approach.
• We outline directions for future research and highlight current challenges.

This work extends our previous work [20] by including our approach to near-memory computing (Section VIII), which outlines our application characterization and compilation framework. Furthermore, we evaluate more recent work in the NMC domain and provide new insights and research directions. The remainder of this article is structured as follows: Section II provides a historical overview of near-memory computing and related work. Section III outlines the evaluation and classification scheme for NMC at main memory (Section IV) and storage-class memory (Section V). Section VI highlights the challenges with cache coherence, virtual memory, the lack of programming models, and data mapping schemes for NMC. In Section VII, we look into the tools and techniques used to perform design space exploration for these systems; this section also illustrates the importance of application characterization. Section VIII presents our approach to near-memory computing, including a high-level analytic comparison with a traditional CPU-centric approach. Finally, Section IX highlights the lessons learned and future research directions.

II. BACKGROUND AND RELATED WORK

The idea of processing close to the memory dates back to the 1960s [21]; however, the first appearance of NMC systems can be traced back to the early 1990s [22]–[27]. An example was Vector IRAM (VIRAM) [28], where the researchers developed a vector processor with on-chip embedded DRAM (eDRAM) to exploit data parallelism in multimedia applications. Although promising results were obtained, these NMC systems did not penetrate the market, and their adoption remained limited.
One of the main reasons

Property      Abbrev   Description
Hierarchy     MM       Main memory
              SCM      Storage class memory
              HM       Heterogeneous memory
Memory Type   C3D      Commercial 3D memory
              DIMM     Dual in-line memory module
              PCM      Phase change memory
              DRAM     Dynamic random-access memory
              SSD      Flash
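The microarchitecture-independent application characterization mentioned in the abstract can be illustrated with a short sketch. This is not the paper's actual toolchain: it only shows the underlying idea of comparing a kernel's operational intensity (operations per byte of memory traffic, counted independently of any particular cache hierarchy) against the host machine's balance point; kernels well below that point are bandwidth-bound and are therefore candidates for NMC offload. The kernel names, operation counts, and machine parameters below are illustrative assumptions.

```python
# Hypothetical illustration of microarchitecture-independent characterization:
# flag kernels as NMC-offload candidates when their operational intensity
# falls below the host's balance point. All numbers are made up.

def operational_intensity(ops, bytes_moved):
    """Operations performed per byte of memory traffic."""
    return ops / bytes_moved

# Host balance point: peak compute / peak bandwidth (e.g. 500 Gop/s, 25 GB/s).
BALANCE = 500e9 / 25e9   # 20 ops/byte

# (total operations, total bytes moved) per kernel -- illustrative values.
kernels = {
    "dense_matmul":    (2e12, 1e10),   # high reuse: compute-bound on the host
    "stream_triad":    (2e8,  3.2e9),  # streaming access: bandwidth-bound
    "graph_traversal": (1e8,  8e9),    # irregular access, low locality
}

candidates = [name for name, (ops, b) in kernels.items()
              if operational_intensity(ops, b) < BALANCE]
print(candidates)
```

Under these assumed numbers, the streaming and graph kernels fall far below the balance point and would be offloaded, while the dense matrix multiply stays on the host, matching the observation in the introduction that low-operational-intensity workloads are the ones penalized by data movement.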

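The analytic model named in the contributions can be sketched at first order. The following is not the paper's actual model; it is a minimal, hypothetical roofline-style estimate in which runtime is bounded by either compute throughput or memory bandwidth, and energy is split into compute energy and data-movement energy. The host and NMC designs differ only in their assumed peak compute, bandwidth, and per-byte transfer energy; every parameter value below is an illustrative assumption, not a measured number.

```python
# First-order, roofline-style comparison of a CPU-centric system and a
# hypothetical NMC design. All parameter values are illustrative assumptions.

def exec_time(flops, bytes_moved, peak_flops, mem_bw):
    """Runtime bound: compute-bound or bandwidth-bound, whichever dominates."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

def energy(flops, bytes_moved, e_per_flop, e_per_byte):
    """Total energy = compute energy + data-movement energy."""
    return flops * e_per_flop + bytes_moved * e_per_byte

# A memory-intensive kernel: low operational intensity (0.25 flop/byte).
flops = 1e9
bytes_moved = 4e9

# CPU-centric host: high peak compute, off-chip DRAM bandwidth,
# expensive per-byte transfers across the memory bus.
t_cpu = exec_time(flops, bytes_moved, peak_flops=500e9, mem_bw=25e9)
e_cpu = energy(flops, bytes_moved, e_per_flop=10e-12, e_per_byte=40e-12)

# NMC: modest cores in the logic layer, but TSV-level bandwidth and far
# cheaper per-byte movement (no off-chip link traversal).
t_nmc = exec_time(flops, bytes_moved, peak_flops=50e9, mem_bw=320e9)
e_nmc = energy(flops, bytes_moved, e_per_flop=10e-12, e_per_byte=4e-12)

print(f"speedup: {t_cpu / t_nmc:.1f}x, energy saving: {e_cpu / e_nmc:.1f}x")
```

For this bandwidth-bound kernel the sketch favors the NMC design despite its ten-times-lower peak compute, which is the qualitative point of such a model: below the machine's balance point, bandwidth and per-byte energy, not peak flops, determine performance and efficiency.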