A Survey on Coarse-Grained Reconfigurable Architectures from a Performance Perspective


Received May 31, 2020, accepted July 13, 2020, date of publication July 27, 2020, date of current version August 20, 2020.
Digital Object Identifier 10.1109/ACCESS.2020.3012084

ARTUR PODOBAS 1,2, KENTARO SANO 1, AND SATOSHI MATSUOKA 1,3
1 RIKEN Center for Computational Science, Kobe 650-0047, Japan
2 Department of Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
3 Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo 152-8550, Japan

Corresponding author: Artur Podobas ([email protected])

This work was supported by the New Energy and Industrial Technology Development Organization (NEDO). The associate editor coordinating the review of this manuscript and approving it for publication was Nitin Nitin. This work is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).

ABSTRACT With the end of both Dennard's scaling and Moore's law, computer users and researchers are aggressively exploring alternative forms of computing in order to continue the performance scaling that we have come to enjoy. Among the more salient and practical of the post-Moore alternatives are reconfigurable systems, with Coarse-Grained Reconfigurable Architectures (CGRAs) seemingly capable of striking a balance between performance and programmability. In this paper, we survey the landscape of CGRAs. We summarize nearly three decades of literature on the subject, with a particular focus on the premise behind the different CGRAs and how they have evolved. Next, we compile metrics of available CGRAs and analyze their performance properties in order to understand and discover knowledge gaps and opportunities for future CGRA research specialized towards High-Performance Computing (HPC). We find that there are ample opportunities for future research on CGRAs, in particular with respect to size, functionality, support for parallel programming models, and the evaluation of more complex applications.

INDEX TERMS Coarse-grained reconfigurable architectures, CGRA, FPGA, computing trends, reconfigurable systems, high-performance computing, post-Moore.

I. INTRODUCTION

With the end of Dennard's scaling [1] and the looming threat that even Moore's law [2] is about to end [3], computing is perhaps facing its most challenging moments. Today, computer researchers and practitioners are aggressively pursuing and exploring alternative forms of computing in order to try to fill the void that an end of Moore's law would leave behind. There is already a plethora of emerging and intrusive technologies with the promise of overcoming the limits of technology scaling, such as quantum or neuromorphic computing [4], [5]. However, not all post-Moore architectures are intrusive, and some merely require us to step away from the comforts that the von Neumann architecture offers. Among the more salient of these technologies are reconfigurable architectures [6].

Reconfigurable architectures are systems that attempt to retain some of the silicon plasticity that an ASIC solution usually throws away. These systems – at least conceptually – allow the silicon to be malleable and its functionality dynamically configurable. A reconfigurable system can, for example, mimic a processor architecture for some time (e.g., a RISC-V core [7]), and then be changed to mimic an LTE baseband station [8]. This property of reconfigurability is highly sought after, since it can mitigate the end of Moore's law to some extent: we do not need more transistors; we just need to spatially configure the silicon to match the computation in time.

Recently, a particular branch of reconfigurable architecture – the Field-Programmable Gate Array (FPGA) [9] – has experienced a surge of renewed interest for use in High-Performance Computing (HPC), and recent research has shown performance or power benefits for multiple applications [10]–[14]. At the same time, many of the limitations of FPGAs, such as slow configuration times, long compilation times, and (comparably) low clock frequencies, remain unsolved. These limitations have been recognized for decades (e.g., [15]–[17]), and have driven forth a different branch of reconfigurable architecture: the Coarse-Grained Reconfigurable Architecture (CGRA).
CGRAs trade away some of the flexibility that FPGAs have in order to overcome these limitations. A CGRA can operate at higher clock frequencies, can provide higher theoretical compute performance, can drastically reduce compilation times, and – perhaps most importantly – can reduce reconfiguration time substantially. While CGRAs have traditionally been used in embedded systems (particularly for media processing), lately they too are being considered for HPC. Even traditional FPGA vendors such as Xilinx [18] and Intel [19] are creating, and/or investigating how to coarsen, their existing reconfigurable architectures to complement other forms of computing.

In this paper, we survey the literature on CGRAs, summarizing the different architectures and systems that have been introduced over time. We complement surveys written by our peers by focusing on understanding the performance trends that CGRAs have been experiencing, providing insights into where the community is moving and into any potential gaps in knowledge that can or should be filled.

The contributions of our work are as follows:
• A survey over three decades of Coarse-Grained Reconfigurable Architectures, summarizing existing architecture types and properties,
• A quantitative analysis of the performance metrics of CGRA architectures as reported in their respective papers, and
• An analysis of trends and observations regarding CGRAs, with discussion.

The remainder of the paper is organized in the following way. Section II introduces the motivation behind CGRAs, as well as their generic design, for the unfamiliar reader. Section III positions this survey against existing surveys on the topic. Section IV quantitatively summarizes each architecture that we reviewed, describing the key characteristics and premise behind each respective architecture. Section V analyzes the reviewed architectures from different perspectives (Sections VI, VII, and VIII), which we finally discuss at the end of the paper in Section IX.

II. INTRODUCTION TO CGRAs

Before summarizing the now three decades of Coarse-Grained Reconfigurable Architecture (CGRA) research, we start by describing the main aspirations and motivations behind them. To do so, we need to look at the predecessor of the CGRA: the Field-Programmable Gate Array (FPGA).

FPGAs are devices that were developed to reduce the cost of simulating and developing Application-Specific Integrated Circuits (ASICs). Because any bug/fault left undiscovered post ASIC tape-out would incur a potentially large cost, designs were first emulated on the FPGA's reconfigurable fabric of SRAM-based Look-Up Tables (LUTs). Each LUT had a number of inputs (usually 4-6) and produced an output (and its complement) as a function of the SRAM content and its inputs. Hence, depending on the sought-after functionality to be simulated, LUTs could be configured and – through a highly reconfigurable interconnect – connected to each other, finally yielding the expected design. The design would naturally run at between one and two orders of magnitude lower frequency than the final standard-cell ASIC (for example, there is a 37× reduction in frequency when running the Intel Atom on an FPGA [21]), but would nevertheless be an invaluable prototyping tool.
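As a minimal sketch of the LUT abstraction (illustrative Python code added here, not code from the survey; the class and its names are hypothetical), a k-input LUT is simply a 2^k-entry SRAM indexed by the input bits, so it can realize any k-input Boolean function by loading the right truth table:

    # Behavioral model of a k-input FPGA look-up table (LUT).
    class LUT:
        def __init__(self, k, truth_table):
            assert len(truth_table) == 2 ** k, "one SRAM bit per input pattern"
            self.k = k
            self.sram = truth_table  # configuration state: 2^k bits

        def evaluate(self, inputs):
            # Pack the k input bits into an SRAM address and read the stored bit.
            addr = sum(bit << i for i, bit in enumerate(inputs))
            out = self.sram[addr]
            return out, 1 - out  # the output and its complement

    # Example: configure a 4-input LUT as a 4-way AND gate.
    and4 = LUT(4, [1 if addr == 0b1111 else 0 for addr in range(16)])
    print(and4.evaluate([1, 1, 1, 1]))  # -> (1, 0)
    print(and4.evaluate([1, 0, 1, 1]))  # -> (0, 1)

Reconfiguring the same LUT into, say, a 4-way XOR is just a matter of rewriting its 16 SRAM bits; the reconfigurable interconnect then stitches many such LUTs into larger designs.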
By the early 1990s, FPGAs had already found other uses (aside from digital-design development) within the telecommunication, military, and automobile industries – the FPGA was seen as a compute device in its own right, and there were some aspirations to use it for general-purpose computing and not only in the niche market of prototyping digital designs. Despite this, several limitations of FPGAs were quickly identified that prohibited coverage of a wide range of applications. For example, unlike software compilation tools that take minutes to compile applications, the FPGA Electronic Design Automation (EDA) flow took significantly longer, often requiring hours or even days of compilation time. Similarly, if the expected application could not fit in a single device, the long reconfiguration overhead (the time it takes to program the FPGA) discouraged time-sharing or context-switching of its resources. Another limitation was that some important arithmetic operators did not map well to the FPGA; for example, a single integer multiplication could often consume a large fraction of the FPGA's resources. Finally, FPGAs were relatively slow, running at a low clock frequency. Many of these challenges and limitations of applying FPGAs to general-purpose computing still hold to this day.

Early reconfigurable-computing pioneers looked at the limitations of FPGAs and considered what would happen if one were to increase the granularity at which the device was programmed. By increasing the granularity, larger and more specialized units could be built, which would increase the performance (clock frequency) of the device. Also, since larger units require less configuration state, reconfiguring the device would be significantly faster, allowing fine-grained time-sharing (multiple contexts) of the device (see the configuration-state sketch below for a rough sense of the difference). Finally, by coarsening the units of reconfiguration, one could include units that map poorly onto FPGAs (e.g., multipliers) directly in the fabric, making better use of the silicon and increasing the generality of the device. These new …
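As a rough illustration of the configuration-state argument above (all numbers below are our own assumptions for the sake of the example; the survey reports no such figures), compare the bits needed to configure a 32-bit multiply on a fine-grained LUT fabric against selecting the same operation on a coarse-grained, word-level unit:

    # Back-of-envelope comparison of configuration state. All constants are
    # assumed, illustrative values, not measurements from the survey.
    LUT_BITS = 2 ** 4          # truth-table bits per 4-input LUT
    ROUTE_BITS_PER_LUT = 60    # assumed interconnect config bits per LUT
    LUTS_FOR_32X32_MULT = 600  # assumed LUT count for a 32x32 multiplier

    fine_grained = LUTS_FOR_32X32_MULT * (LUT_BITS + ROUTE_BITS_PER_LUT)

    OPCODE_BITS = 5            # selects one of up to 32 word-level operations
    OPERAND_ROUTE_BITS = 24    # assumed coarse-grained interconnect config

    coarse_grained = OPCODE_BITS + OPERAND_ROUTE_BITS

    print(f"fine-grained:   {fine_grained} config bits")    # 45600
    print(f"coarse-grained: {coarse_grained} config bits")  # 29

Under these (hypothetical) numbers, the coarse-grained unit needs roughly three orders of magnitude less state to load, which translates directly into faster reconfiguration and cheap context switching between configurations.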