Compiling Irregular Software to Specialized Hardware

Richard Townsend

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Graduate School of Arts and Sciences

COLUMBIA UNIVERSITY
2019

©2019 Richard Townsend
All rights reserved

ABSTRACT

High-level synthesis (HLS) has simplified the design process for energy-efficient hardware accelerators: a designer specifies an accelerator's behavior in a "high-level" language, and a toolchain synthesizes register-transfer level (RTL) code from this specification. Many HLS systems produce efficient hardware designs for regular algorithms (i.e., those with limited conditionals or regular memory access patterns), but most struggle with irregular algorithms that rely on dynamic, data-dependent memory access patterns (e.g., traversing pointer-based structures like lists, trees, or graphs). HLS tools typically provide imperative, side-effectful languages to the designer, which makes it difficult to correctly specify and optimize complex, memory-bound applications.

In this dissertation, I present an alternative HLS methodology that leverages properties of functional languages to synthesize hardware for irregular algorithms. The main contribution is an optimizing compiler that translates pure functional programs into modular, parallel dataflow networks in hardware. I give an overview of this compiler, explain how its source and target together enable parallelism in the face of irregularity, and present two specific optimizations that further exploit this parallelism. Taken together, this dissertation verifies my thesis that pure functional programs exhibiting irregular memory access patterns can be compiled into specialized hardware and optimized for parallelism.

This work extends the scope of modern HLS toolchains. By relying on properties of pure functional languages, our compiler can synthesize hardware from programs containing constructs that commercial HLS tools prohibit, e.g., recursive functions and dynamic memory allocation. Hardware designers may thus use our compiler in conjunction with existing HLS systems to accelerate a wider class of algorithms than before.
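As a concrete illustration of the kind of program at issue, here is a minimal Haskell sketch of an irregular function of the sort the abstract describes: it is recursive and allocates memory dynamically, precisely the constructs commercial HLS tools typically prohibit. The List type and function names are assumptions made for this example, not code taken from the dissertation; Section 4.1.1 uses a similar map-over-a-list function as its running example.

```haskell
-- Illustrative sketch only: the List type and names below are assumptions
-- for this example, not definitions from the dissertation.
data List = Nil | Cons Int List

-- Apply a function to every element of a linked list, allocating a new list.
-- Each Cons cell is reached by following a pointer loaded from memory, so
-- the access pattern is dynamic and data-dependent (i.e., irregular).
mapList :: (Int -> Int) -> List -> List
mapList _ Nil         = Nil
mapList f (Cons x xs) = Cons (f x) (mapList f xs)

main :: IO ()
main = print (sumList (mapList (+ 1) (fromList [1, 2, 3])))
  where
    fromList = foldr Cons Nil
    sumList Nil         = 0
    sumList (Cons x xs) = x + sumList xs
```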
Contents

List of Figures
List of Tables
Acknowledgements
1 Introduction
  1.1 My Thesis
  1.2 Structure of the Dissertation
  1.3 High-Level Synthesis and Irregular Algorithms
  1.4 Contributions
2 Related Work
  2.1 High-Level Synthesis
    2.1.1 Functional HLS
    2.1.2 Irregular HLS
  2.2 Hardware Dataflow Networks
  2.3 Parallelizing Divide-and-Conquer Algorithms
    2.3.1 Software Techniques
    2.3.2 Memory Partitioning
    2.3.3 HLS for Divide-and-Conquer
  2.4 Optimizing Recursive Data Structures
3 An Overview of Our Compiler
  3.1 Our Main IR: a Variant of GHC Core
  3.2 The End-to-End Compilation Flow
  3.3 Lowering Core
    3.3.1 Removing Polymorphism
    3.3.2 Making Names Unique
    3.3.3 Lambda Lifting
    3.3.4 Removing Recursion
    3.3.5 Tagging Memory Operations
    3.3.6 Simplifying Case
    3.3.7 Adding Go
    3.3.8 Lifting Expressions
    3.3.9 Optimizing Reads
4 From Functional Programs to Dataflow Networks
  4.1 A Restricted Dialect of Core: Floh
    4.1.1 An Example: Map
  4.2 Dataflow Networks
  4.3 Translation from Floh to Dataflow
    4.3.1 Translating Expressions
    4.3.2 Translating Simple Functions and Cases
    4.3.3 Translating Clustered Functions and Cases
    4.3.4 Putting It All Together: Translating the Map Example
  4.4 Dataflow Networks in Hardware
    4.4.1 Evaluation Order
    4.4.2 Stateless Actors
    4.4.3 Stateful Actors
    4.4.4 Inserting Buffers
  4.5 Experimental Evaluation
    4.5.1 Methodology
    4.5.2 Strict vs. Non-strict Tail Recursive Calls
    4.5.3 Sensitivity to Memory Latency
    4.5.4 Sensitivity to Function Latency
5 Realizing Dataflow Networks in Hardware
  5.1 Specifications: Kahn Networks
    5.1.1 Kahn Networks
    5.1.2 Dataflow Actors
    5.1.3 Unit-rate, Mux, and Demux Actors
    5.1.4 Nondeterministic Merge
  5.2 Hardware Dataflow Actors
    5.2.1 Communication and Combinational Cycles
    5.2.2 Unit-Rate Actors
    5.2.3 Mux and Demux Actors
    5.2.4 Merge Actors
  5.3 Channels in Hardware
    5.3.1 Data and Control Buffers
    5.3.2 Forks
  5.4 The Argument for Correctness
  5.5 Our Dataflow IR: DF
    5.5.1 Channel Type Definitions
    5.5.2 Actor Instances and Type Definitions
    5.5.3 Checking DF Specifications
  5.6 Experimental Evaluation
    5.6.1 Experimental Networks
    5.6.2 Random Buffer Allocation
    5.6.3 Manual Buffer Allocation
    5.6.4 Pipelining the Conveyor
    5.6.5 Memory in Dataflow Networks
    5.6.6 Overhead of Our Method
6 Optimizing Irregular Divide-and-Conquer Algorithms
  6.1 Transforming Code
    6.1.1 Finding Divide-and-Conquer Functions
    6.1.2 Task-Level Parallelism: Copying Functions
    6.1.3 Memory-Level Parallelism: Copying Types
    6.1.4 New Types and Conversion Functions
  6.2 Partitioning On-Chip Memory
  6.3 Experimental Evaluation
    6.3.1 Exposing Memory-Level Parallelism
    6.3.2 Exploiting Memory-Level Parallelism
7 Packing Recursive Data Types
  7.1 An Example: Appending Lists
  7.2 Packing Algorithm
    7.2.1 Packed Data Types
    7.2.2 Pack and Unpack Functions
    7.2.3 Injection and Hoisting
    7.2.4 Simplification
    7.2.5 Heuristics to Limit Code Growth
  7.3 Experimental Evaluation
    7.3.1 Testing Scheme
    7.3.2 Experimental Results
8 Conclusions and Further Work
  8.1 Conclusions
  8.2 Further Work
    8.2.1 Formalism
    8.2.2 Extending the Compiler
    8.2.3 Synthesizing a Realistic Memory System
    8.2.4 Improved and Additional Optimizations
Bibliography

List of Figures

1.1 A visualization of the research supporting my thesis. I focus on synthesizing hardware from pure functional programs exhibiting irregular memory access patterns, e.g., the function map f l, which applies a function f to each element of a linked list l, storing the results in a new list (a). I translate these programs into modular, parallel dataflow networks in hardware (b), and apply two optimizations that exploit more parallelism in the synthesized circuits: the first improves memory-level parallelism in divide-and-conquer algorithms by combining a novel code transformation with a type-based memory partitioning scheme (c); the second packs more data into recursive types to improve spatial locality (i.e., data-level parallelism) and reduce an irregular program's memory footprint (d).

3.1 The abstract syntax of the compiler's main IR: a variant of GHC Core.
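As a rough illustration of the packing optimization summarized in Figure 1.1(d) and developed in Chapter 7, the sketch below contrasts an ordinary list type, which stores one element per heap cell, with a packed variant that stores up to three. The types, names, and packing factor are assumptions made for this example; the dissertation's actual packed representation, pack/unpack functions, and code-growth heuristics are defined in Chapter 7.

```haskell
-- Illustrative sketch only: the concrete types, names, and packing factor
-- are assumptions, not the dissertation's own definitions (see Chapter 7).
data List = Nil | Cons Int List                -- one element per heap cell

data PackedList                                -- up to three elements per cell
  = PNil
  | POne   Int PackedList                      -- fallback when fewer than three remain
  | PThree Int Int Int PackedList
  deriving Show

-- Packing means each memory access fetches more useful data (better spatial
-- locality) and the list occupies fewer heap cells overall.
pack :: List -> PackedList
pack (Cons a (Cons b (Cons c rest))) = PThree a b c (pack rest)
pack (Cons a rest)                   = POne a (pack rest)
pack Nil                             = PNil

main :: IO ()
main = print (pack (foldr Cons Nil [1 .. 7]))
-- prints: PThree 1 2 3 (PThree 4 5 6 (POne 7 PNil))
```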