Enabling Development of OpenCL Applications on FPGA Platforms


Kavya S Shagrithaya

Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Computer Engineering.

Committee: Peter M. Athanas (Chair), Patrick R. Schaumont, Paul E. Plassmann

August 6, 2012, Blacksburg, Virginia
Keywords: OpenCL, AutoESL, FPGA, Convey, HPC
Copyright 2012, Kavya S Shagrithaya

Abstract

FPGAs can potentially deliver tremendous acceleration in high-performance server and embedded computing applications. Whether used to augment a processor or as a stand-alone device, these reconfigurable architectures are being deployed in a large number of implementations owing to the massive amounts of parallelism offered. At the same time, a significant challenge to their widespread acceptance is the laborious effort required to program these devices. The increased development time, the level of experience needed by developers, the lower turns per day, and the difficulty of iterating quickly over designs all affect the time-to-market for many solutions. High-level synthesis aims to increase the productivity of FPGAs and bring them within the reach of software developers and domain experts. OpenCL is a specification introduced for parallel programming across platforms. Applications written in OpenCL consist of two parts: a host program for initialization and management, and kernels that define the compute-intensive tasks. In this thesis, a compilation flow that generates customized, application-specific hardware descriptions from OpenCL computation kernels is presented. The flow uses the Xilinx AutoESL tool to obtain the design specification for the compute cores. An architecture is provided that integrates the cores with memory and host interfaces. The host program of the application is compiled and executed to demonstrate a proof-of-concept implementation toward an end-to-end flow that abstracts the hardware at the front end.

Acknowledgements

Firstly, I am sincerely thankful to my advisor, Dr. Peter Athanas, for providing me an opportunity to be a part of the Configurable Computing Lab. He has been a source of inspiration throughout and has been very encouraging at all times. I would like to express my deep gratitude for his continual support and invaluable guidance, without which I could not have progressed in my research. I have learnt a lot from him, in academics and beyond. I would like to thank Dr. Patrick Schaumont and Dr. Paul Plassmann for serving as members of my committee. The courses that I took under Dr. Schaumont were a great learning experience, and I immensely enjoyed his style of teaching. My parents and my brother have been there for me, through thick and thin. I am very grateful to them for encouraging me towards pursuing graduate studies and supporting me throughout. Special thanks to my father, my role model, for all his words of wisdom. The environment in the lab has always been conducive to learning things the enjoyable way.
I would like to thank my lab mates: Tony, for his advice on all work-related matters; Karl, for helping me ramp up in the initial days; Krzysztof, for his help and guidance on this project; Kaiwen, for his inputs towards this work; and the rest of the CCM family, David, Ritesh, Ramakrishna, Keyrun, Xin Xin, Wen Wei, Andrew, Richard, Shaver, Jason, Ryan, Teresa, Abhay and Jacob, for making this a truly enriching experience. My heartfelt thanks to Kaushik and Abhinav for their support in curricular and non-curricular aspects; Iccha, Vishwas, Vikas, Shivkumar, Navish, Atashi and Mahesh for the fun times off work; and all my other friends at Blacksburg who have made these two years very memorable for me. I am thankful to God for everything.

Contents

1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Thesis Organization
2 Background
  2.1 High-Level Synthesis
  2.2 Overview of OpenCL
    2.2.1 OpenCL Models (Platform Model, Execution Model, Memory Model, Programming Model)
    2.2.2 OpenCL Framework
    2.2.3 OpenCL C Programming Language
  2.3 Related Work
  2.4 Summary
3 Approach and Implementation
  3.1 Approach
    3.1.1 An Introduction to AutoESL
  3.2 Implementation Specifics
    3.2.1 OpenCL to AutoESL Source-to-Source Translation
    3.2.2 AutoESL Synthesis
    3.2.3 Integration and Mapping on Convey
    3.2.4 Host Library
  3.3 Summary
4 Results
  4.1 Case Study: Vector Addition
    4.1.1 Performance and Resource Utilization
  4.2 Case Study: Matrix Multiplication
  4.3 Comparison of Methodologies
  4.4 Challenges in Using OpenCL
  4.5 Summary
5 Conclusion
  5.1 Future Work
Bibliography
Appendix A: Dot File Description for Vector Addition
Appendix B: OpenCL API Implementations for Kernel Execution

List of Figures

2.1 A few high-level synthesis tools and their source inputs
2.2 The platform model for the OpenCL architecture [12]
2.3 An example of an NDRange index space where N = 2
2.4 OpenCL memory regions and their relation to the execution model
3.1 Overview of the design flow
3.2 Dot format visualization for the vector addition kernel
3.3 System architecture on Convey
4.1 Execution flow for the vector addition application
4.2 Performance results for different vector sizes
4.3 Resource utilization for the vector addition application for a single AE
4.4 Matrix multiplication operation

List of Tables

4.1 Performance results for the vector addition application for vectors of size 1024
4.2 Area estimates for a single core from the AutoESL report
4.3 Comparison with SOpenCL and OpenRCL implementations
4.4 Comparison with Altera's OpenCL for FPGAs program
Chapter 1
Introduction

An increasing number of application areas today are embracing high performance computing (HPC) solutions for their processing needs. With frequency scaling having run its course, conventional processors have given way to various accelerator technologies to meet the computational demands. These range from Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) to heterogeneous multicore processors like the Cell and hybrid architectures like the Convey HC-1. Different classes of applications employ different targets best suited to the problem at hand. Each accelerator technology possesses advantages over the others for different tasks, leading to possibilities of a heterogeneous mix of architectures [1-3]. Reconfigurable systems like FPGAs provide promising opportunities for acceleration in many fields due to their inherent flexibility and massive parallel computation capabilities. The availability of a multitude of hardware devices requires a comprehensive approach to a solution, one that combines knowledge of the underlying architecture with methods of designing the algorithm. This translates to increased implementation effort, high learning curves and architecture-aware programming at the developer's end. In 2004, DARPA launched a project named High Productivity Computing Systems (HPCS) that aimed at providing economically viable, high-productivity systems [4]. They proposed using "time to solution" as a metric that includes the time to develop a solution as well as the time taken to execute it. CPUs and GPUs are programmed in high-level languages like C/C++ and CUDA. This level of abstraction enables faster deployment of solutions. In the case of FPGAs, implementations require tedious hardware design and debug, which greatly impact the development time.

1.1 Motivation

Enormous acceleration can be delivered by FPGAs owing to the flexibility and parallelism provided by their fine-grained architecture. However, being able to fully harness this potential presents challenges in terms of programmability. Implementing applications on FPGAs involves cumbersome RTL programming and manual optimizations. Besides domain expertise and software design skills, developers are required to understand intricate details of hardware design including timing closure, state machine control and cycle-accurate implementations. As a result, the effort spent drastically reducing execution time translates into a significant increase in the time to develop the solution. Jeff Chase et al. implemented a tensor-based real-time optical flow algorithm on FPGA and GPU platforms. A fixed-point version was mapped on a Virtex II Pro XC2VP30 FPGA on the Xilinx XUP V2P board and a single-precision floating-point implementation was mapped on an Nvidia GeForce 8800 GTX GPU, with appropriate design optimizations for each platform. Comparisons were drawn between both implementations on various parameters including performance, power, and productivity. The work presented comparable
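The abstract above describes an OpenCL application as a host program plus compute kernels, and the contents list a vector addition case study in Chapter 4. As a rough illustration of the kind of kernel such a flow consumes (a minimal OpenCL C sketch, not the thesis's actual source), a vector addition kernel looks like this:

    /* Illustrative OpenCL C kernel (not taken from the thesis).
     * Each work-item in the NDRange launched by the host adds one element. */
    __kernel void vadd(__global const int *a,
                       __global const int *b,
                       __global int *c,
                       const unsigned int n)
    {
        size_t i = get_global_id(0);  /* this work-item's index in dimension 0 */
        if (i < n)                    /* guard in case the host rounds the global size up */
            c[i] = a[i] + b[i];
    }

In the flow described in Chapter 3, a kernel of this form would be translated into AutoESL-ready C and synthesized into a compute core, while the host program runs against an OpenCL host library; on a GPU, the same source would be handled by the vendor's OpenCL compiler.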
Recommended publications
  • Review of FPD's Languages, Compilers, Interpreters and Tools
    ISSN 2394-7314
    International Journal of Novel Research in Computer Science and Software Engineering
    Vol. 3, Issue 1, pp: (140-158), Month: January-April 2016, Available at: www.noveltyjournals.com

    Review of FPD's Languages, Compilers, Interpreters and Tools

    1Amr Rashed, 2Bedir Yousif, 3Ahmed Shaban Samra
    1Higher Studies Deanship, Taif University, Taif, Saudi Arabia
    2Communication and Electronics Department, Faculty of Engineering, Kafrelsheikh University, Egypt
    3Communication and Electronics Department, Faculty of Engineering, Mansoura University, Egypt

    Abstract: FPGAs have achieved quick acceptance, spread and growth over the past years because they can be applied to a variety of applications, including random logic, bioinformatics, video and image processing, device controllers, communication encoding, modulation and filtering, limited-size systems with RAM blocks, and many more. For video and image processing applications, for example, it is very difficult and time-consuming to use traditional HDL languages, so it becomes necessary to look for other, more efficient synthesis tools to implement a design. The question is which language or tool is the best fit for the desired application. This research is also helpful for language developers, as it identifies the strengths, weaknesses, ease of use and efficiency of each tool or language. The work faced many challenges, one of them being that there is no complete reference covering all FPGA languages and tools; the available references and guides are few and often of poor quality, and even finding a simple example from which to learn some of these tools or languages can be time-consuming. This paper presents a review and guide of almost all PLD languages, interpreters and tools that can be used for programming, simulating and synthesizing PLDs for analog, digital and mixed signals and systems, supported with simple examples.
  • Reconfigurable Computing in the New Age of Parallelism
    Reconfigurable Computing in the New Age of Parallelism

    Walid Najjar and Jason Villarreal
    Department of Computer Science and Engineering, University of California Riverside, Riverside, CA 92521, USA
    {najjar,villarre}@cs.ucr.edu

    Abstract. Reconfigurable computing is an emerging paradigm enabled by the growth in size and speed of FPGAs. In this paper we discuss its place in the evolution of computing as a technology as well as the role it can play in the current technology outlook. We discuss the evolution of ROCCC (Riverside Optimizing Compiler for Configurable Computing) in this context.

    Keywords: Reconfigurable computing, FPGAs.

    1 Introduction

    Reconfigurable computing (RC) has emerged in recent years as a novel computing paradigm, most often proposed as complementing traditional CPU-based computing. This paper is an attempt to situate this computing model in the historical evolution of computing in general over the past half century and, in doing so, define the parameters of its viability, its potential and the challenges it faces. In this section we briefly summarize the main factors that have contributed to the current rise of RC as a computing paradigm. The RC model, its potential and challenges are described in Section 2. Section 3 describes ROCCC 2.0 (Riverside Optimizing Compiler for Configurable Computing), a C-to-HDL compilation tool whose objective is to raise the programming abstraction level for RC while providing the user with both a top-down and a bottom-up approach to designing FPGA-based code accelerators.

    1.1 The Role of the von Neumann Model

    Over a little more than half a century, computing has emerged from non-existence, as a technology, to being a major component in the world's economy.
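    C-to-HDL compilers in this family generally work from loop nests over arrays or streams and turn them into pipelined datapaths. The sketch below shows the kind of C such a tool might accept; it is purely illustrative, is not taken from the ROCCC distribution, and its function and parameter names are made up for this example.

        /* Illustrative C loop of the sort C-to-HDL compilers target: a
         * fixed-tap FIR filter over an input array. A compiler would typically
         * unroll the fixed-bound inner loop and pipeline the outer one,
         * producing one output per clock cycle once the pipeline fills.
         * (Example only; not ROCCC code.) */
        #define TAPS 4

        void fir(const int in[], int out[], int n, const int coeff[TAPS])
        {
            for (int i = 0; i + TAPS <= n; i++) {   /* slide a window over the input */
                int acc = 0;
                for (int t = 0; t < TAPS; t++)      /* fixed trip count: easy to unroll */
                    acc += coeff[t] * in[i + t];
                out[i] = acc;
            }
        }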
  • PACT HDL: A C Compiler Targeting ASICs and FPGAs with Power and Performance Optimizations
    PACT HDL: A C Compiler Targeting ASICs and FPGAs with Power and Performance Optimizations*

    Alex Jones, Debabrata Bagchi, Satrajit Pal, Xiaoyong Tang, Alok Choudhary, Prith Banerjee
    Center for Parallel and Distributed Computing, Department of Electrical and Computer Engineering, Technological Institute, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3118
    Phone: (847) 491-3641  Fax: (847) 491-4455
    Email: {akjones, bagchi, satrajit, tang, choudhar, banerjee}@ece.northwestern.edu

    ABSTRACT
    Chip fabrication technology continues to plunge deeper into sub-micron levels, requiring hardware designers to utilize ever-increasing amounts of logic and shorten design time. Toward that end, high-level languages such as C/C++ are becoming popular for hardware description and synthesis in order to more quickly leverage complex algorithms. Similarly, as logic density increases due to technology, power dissipation becomes a progressively more important metric of hardware design. PACT HDL, a C to HDL compiler, merges automated hardware synthesis of high-level algorithms with power and performance optimizations and targets arbitrary hardware architectures, particularly in a System on a Chip (SoC) setting that incorporates reprogrammable and application-specific hardware.

    1. INTRODUCTION
    As chip fabrication processes progress deep into the sub-micron level and Integrated Circuits (ICs) and Field Programmable Gate Arrays (FPGAs) can support larger and larger amounts of logic, system designers require increasingly high-level tools to keep up. Recently, industry has targeted C/C++ and variants as potential long-term replacements for Hardware Description Languages (HDLs) such as VHDL and Verilog currently employed for today's hardware design. Also, as technologies increase in density in both the fabricated and reconfigurable areas, power consumption becomes a progressively more important problem. While some work has been done in targeting C/C++ as an HDL and considering power consumption in hardware synthesis,
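    One family of performance transformations that such compilers automate can be written out by hand: unrolling a reduction exposes independent multiply-adds that can be scheduled as parallel hardware, trading extra logic (and hence area and power) for throughput. The snippet below only illustrates that trade-off; it is not an example of PACT HDL's own transformations, and the function name is invented for this sketch.

        /* Hand-unrolled dot product: the four partial sums are independent, so a
         * C-to-hardware compiler can map them to parallel multipliers and a small
         * adder tree -- higher throughput at the cost of more logic and power.
         * (Illustrative only; not PACT HDL output.) */
        int dot4(const int a[], const int b[], int n)  /* assumes n is a multiple of 4 */
        {
            int s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            for (int i = 0; i < n; i += 4) {
                s0 += a[i]     * b[i];
                s1 += a[i + 1] * b[i + 1];
                s2 += a[i + 2] * b[i + 2];
                s3 += a[i + 3] * b[i + 3];
            }
            return (s0 + s1) + (s2 + s3);              /* adder tree over partial sums */
        }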
  • Embedded Processing Using FPGAs
    Embedded Processing Using FPGAs

    Agenda
    • Why FPGA Platform Based Embedded Processing
    • Embedded Use Models And Their FPGA Based Solutions
    • Architecture/Topology Choices
    • A Reoccurring Question: Hardware Or Software
    • Reconfigurable Hardware
    • Tool Flows For FPGA Based Embedded Systems

    Why Use Processors In the First Place
    • Microcontrollers (µC) and Microprocessors (µP) Provide a Higher Level of Design Abstraction
      – Most µC functions can be implemented using VHDL or Verilog – downsides are parallelism & complexity
      – Using C/C++ abstraction & serial execution makes certain functions much easier to implement in a µC
    • Discrete µCs are Inexpensive and Widely Used
      – µCs have years of momentum and software designers have vast experience using them

    Why Embedded Design Using FPGAs
    In Addition To The Universal Drive Towards Smaller, Cheaper, Faster With Less Power…
    1. Difficult to Find the Required Mix of Peripherals in Off the Shelf (OTS) Microcontroller Solutions
    2. Selecting a Single Processor Core with Long Term Solution Viability is Difficult at Best
    3. Without Direct Ownership of the Processing Solution, Obsolescence is Always a Concern
  • HDL and Programming Languages
    Hardware description language
    From Wikipedia, the free encyclopedia

    In electronics, a hardware description language or HDL is any language from a class of computer languages, specification languages, or modeling languages for formal description and design of electronic circuits, and most commonly, digital logic. It can describe the circuit's operation, its design and organization, and tests to verify its operation by means of simulation.[citation needed]

    HDLs are standard text-based expressions of the spatial and temporal structure and behaviour of electronic systems. Like concurrent programming languages, HDL syntax and semantics include explicit notations for expressing concurrency. However, in contrast to most software programming languages, HDLs also include an explicit notion of time, which is a primary attribute of hardware. Languages whose only characteristic is to express circuit connectivity between a hierarchy of blocks are properly classified as netlist languages used in electronic computer-aided design (CAD).

    HDLs are used to write executable specifications of some piece of hardware. A simulation program, designed to implement the underlying semantics of the language statements, coupled with simulating the progress of time, provides the hardware designer with the ability to model a piece of hardware before it is created physically. It is this executability that gives HDLs the illusion of being programming languages, when they are more precisely classed as specification languages or modeling languages. Simulators capable of supporting discrete-event (digital) and continuous-time (analog) modeling exist, and HDLs targeted for each are available. It is certainly possible to represent hardware semantics using traditional programming languages such as C++, although to function such programs must be augmented with extensive and unwieldy class libraries.
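    The "explicit notion of time" mentioned above is exactly what a general-purpose language has to emulate. The toy C model below (an illustrative sketch, not SystemC or any real HDL library) drives a clocked counter with an explicit cycle loop and a separate next-state value, which is the pattern that class libraries such as SystemC wrap up for C++.

        /* Toy C model of a clocked 8-bit counter. Time is emulated with an
         * explicit cycle loop; the next-state value is computed "combinationally"
         * and only committed at the simulated clock edge, mimicking a register.
         * (Illustrative sketch only.) */
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint8_t q = 0;                        /* current register contents       */
            for (int cycle = 0; cycle < 5; cycle++) {
                uint8_t d = (uint8_t)(q + 1);     /* combinational next-state logic  */
                q = d;                            /* "rising edge": register updates */
                printf("cycle %d: q = %u\n", cycle, q);
            }
            return 0;
        }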
  • Evaluation of the Hardwired Sequence Control System Generated by High-Level Synthesis
    This is the accepted version of the following article: Naoki Fujieda, Shuichi Ichikawa, Yoshiki Ishigaki, and Tasuku Tanaka, "Evaluation of the hardwired sequence control system generated by high-level synthesis," Proc. 2017 IEEE International Symposium on Industrial Electronics (ISIE 2017), June 2017, published in final form at http://dx.doi.org/10.1109/ISIE.2017.8001426. © 2017 IEEE.

    Evaluation of the hardwired sequence control system generated by high-level synthesis
    Naoki Fujieda, Shuichi Ichikawa, Yoshiki Ishigaki, and Tasuku Tanaka
    Department of Electrical and Electronic Information Engineering, Toyohashi University of Technology, Toyohashi, Aichi, Japan
    [email protected] (Naoki Fujieda), [email protected] (Shuichi Ichikawa)

    Abstract—This study presents the application of commercial High-Level Synthesis (HLS) to a hardwired control application, with a quantitative comparison to the traditional approach that uses logic synthesis with HDL. Though the circuits derived from HLS are comparable to those of logic synthesis, the design trade-offs in HLS are difficult to control. This study also presents

    ... because the logic circuit is more difficult to target for analysis and duplication than software. In early studies, Hardware Description Languages (HDL) and logic synthesis systems were used to generate a hardwired control circuit.
  • An Automated Flow to Generate Hardware Computing Nodes from C for an FPGA-Based MPI Computing Network
    An Automated Flow to Generate Hardware Computing Nodes from C for an FPGA-Based MPI Computing Network

    by D.Y. Wang
    A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF BACHELOR OF APPLIED SCIENCE
    DIVISION OF ENGINEERING SCIENCE, FACULTY OF APPLIED SCIENCE AND ENGINEERING, UNIVERSITY OF TORONTO
    Supervisor: Paul Chow
    April 2008

    Abstract

    Recently there have been initiatives from both industry and academia to explore the use of FPGA-based application-specific hardware acceleration in high-performance computing platforms, as traditional supercomputers based on clusters of generic CPUs fail to scale to meet the growing demand of computation-intensive applications due to limitations in power consumption and costs. Research has shown that a heterogeneous system built exclusively on FPGAs, using a combination of different types of computing nodes including embedded processors and application-specific hardware accelerators, is a scalable way to use FPGAs for high-performance computing. An example of such a system is the TMD [11], which also uses a message-passing network to connect the computing nodes. However, the difficulty of designing high-speed hardware modules efficiently from software descriptions is preventing FPGA-based systems from being widely adopted by software developers. In this project, an automated tool flow is proposed to fill this gap. The AUTO flow is developed to automatically generate a hardware computing node from a C program that can be used directly in the TMD system. As an example application, a Jacobi heat-equation solver is implemented in a TMD system where a soft processor is replaced by a hardware computing node generated using the AUTO flow.
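    The example application named above is a Jacobi heat-equation solver. For reference, one serial Jacobi sweep in C looks roughly like the sketch below; this is the standard stencil formulation, not code from the TMD system or the AUTO flow, and the grid size is an arbitrary choice for the example.

        /* One Jacobi sweep on an N x N grid: every interior point becomes the
         * average of its four neighbours from the previous iteration. Repeated
         * sweeps converge toward the steady-state heat distribution.
         * (Generic sketch; not taken from the AUTO/TMD tool flow.) */
        #define N 64

        void jacobi_sweep(const double cur[N][N], double next[N][N])
        {
            for (int i = 1; i < N - 1; i++)
                for (int j = 1; j < N - 1; j++)
                    next[i][j] = 0.25 * (cur[i - 1][j] + cur[i + 1][j] +
                                         cur[i][j - 1] + cur[i][j + 1]);
        }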
  • Advances in Architectures and Tools for FPGAs and Their Impact on the Design of Complex Systems for Particle Physics
    Advances in Architectures and Tools for FPGAs and their Impact on the Design of Complex Systems for Particle Physics

    Anthony Gregerson (a), Amin Farmahini-Farahani (a), William Plishker (b), Zaipeng Xie (a), Katherine Compton (a), Shuvra Bhattacharyya (b), Michael Schulte (a)
    (a) University of Wisconsin - Madison
    (b) University of Maryland - College Park

    Abstract

    The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today's FPGA devices provide a significantly higher amount of resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but must also adopt design tools and methodologies that can scale along with that hardware.

    ... these trends has been a rapid adoption of FPGAs in HEP electronics. A large proportion of the electronics in the Compact Muon Solenoid Level-1 Trigger, for example, are based on FPGAs, and many of the remaining ASICs are scheduled to be replaced with FPGAs in proposed upgrades [1]. Improvements in FPGA technology are not likely to end soon. Today's high-density FPGAs are based on a 40-nm silicon process and already contain an order of magnitude more logic than the FPGAs available at planning stage of the Large Hadron Collider's electronics. 32 and 22 nm silicon process technologies have already been demonstrated to be feasible; as FPGAs migrate to these improved technologies their logic
  • Techniques for Enabling In-System Observation-based Debug of High-Level Synthesis Circuits on FPGAs
    Techniques for Enabling In-System Observation-based Debug of High-Level Synthesis Circuits on FPGAs

    by Jeffrey Goeders
    BASc Computer Engineering, University of Toronto, 2010
    MASc Computer Engineering, The University of British Columbia, 2012
    A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)
    The University of British Columbia (Vancouver)
    September 2016
    © Jeffrey Goeders, 2016

    Abstract

    High-level synthesis (HLS) is a rapidly growing design methodology that allows designers to create digital circuits using a software-like specification language. HLS promises to increase the productivity of hardware designers in the face of steadily increasing circuit sizes, and to broaden the availability of hardware acceleration, allowing software designers to reap the benefits of hardware implementation. One roadblock to HLS adoption is the lack of an in-system debugging infrastructure. Existing debug technologies are limited to software emulation and cannot be used to find bugs that only occur in the final operating environment. This dissertation investigates techniques for observing HLS circuits, allowing designers to debug the circuit in the context of the original source code while it executes at speed in the normal operating environment. This dissertation comprises four major contributions toward this goal. First, we develop a debugging framework that provides users with a basic software-like debug experience, including single-stepping, breakpoints and variable inspection. This is accomplished by automatically inserting specialized debug instrumentation into the user's circuit, allowing an external debugger to observe the circuit. Debugging at speed is made possible by recording circuit execution in on-chip memories and retrieving the data for offline analysis.
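    The record-and-retrieve idea in the last sentence can be pictured as a circular trace buffer sized to fit an on-chip memory: it continuously overwrites its oldest entries, so it always holds the most recent window of signal values for offline inspection. The C sketch below is only a conceptual model of that behaviour (the dissertation's instrumentation is hardware inserted into the circuit, not this code), and all names are invented for the illustration.

        /* Conceptual model of an execution-trace recorder: a fixed-size circular
         * buffer of (cycle, value) samples. When full, the oldest sample is
         * overwritten, so the buffer retains the latest TRACE_DEPTH samples for
         * later offline analysis. (Illustrative sketch only.) */
        #include <stdint.h>

        #define TRACE_DEPTH 1024                /* samples that fit in one on-chip RAM */

        struct trace_entry { uint32_t cycle; uint32_t value; };

        static struct trace_entry trace_buf[TRACE_DEPTH];
        static uint32_t trace_head = 0;         /* index of the next slot to write     */

        void trace_record(uint32_t cycle, uint32_t value)
        {
            trace_buf[trace_head].cycle = cycle;
            trace_buf[trace_head].value = value;
            trace_head = (trace_head + 1) % TRACE_DEPTH;   /* wrap around when full    */
        }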
  • The GENCOD Project: Automated Generation of Hardware Code for Safety-Critical Applications on FPGA Targets
    The GENCOD project: Automated generation of Hardware code for safety critical applications on FPGA targets

    Pascal Pampagnin (1), Ludovic Letellier (2)
    1: Airbus Operations SAS, 316 route de Bayonne, 31060 Toulouse, France
    2: Alyotech Innovations, 22, bd Déodat de Séverac, 31770 Colomiers, France

    To cite this version: Pascal Pampagnin, Ludovic Letellier. The GENCOD project: Automated generation of Hardware code for safety critical applications on FPGA targets. ERTS2 2010, Embedded Real Time Software & Systems, May 2010, Toulouse, France. HAL Id: hal-02264384, https://hal.archives-ouvertes.fr/hal-02264384, submitted on 6 Aug 2019.

    Abstract: GENCOD is a research project for solutions to automated generation of safe code for Field Programmable Gate Arrays (FPGA) targets. The paper describes a typical ASIC/FPGA workflow and the current implementation for airborne electronic hardware design.

    [Table: processor generations by decade — '80: 286 (~10^4 transistors), '90: Intel 486 (~10^6), '00: Intel Pentium 4 (~42×10^6), '05: Intel Itanium 2 (~41×10^7), '10: Intel Itanium/i7 (~2×10^9)]
  • Implementation of FPGA-Based Object Tracking Algorithm
    IMPLEMENTATION OF FPGA-BASED OBJECT TRACKING ALGORITHM

    A PROJECT REPORT
    Submitted by G. SHRIKANTH (21904106079) and KAUSHIK SUBRAMANIAN (21904106043)
    in partial fulfillment for the award of the degree of BACHELOR OF ENGINEERING in ELECTRONICS AND COMMUNICATION ENGINEERING
    SRI VENKATESWARA COLLEGE OF ENGINEERING, SRIPERUMBUDUR
    ANNA UNIVERSITY: CHENNAI 600 025
    APRIL 2008

    BONAFIDE CERTIFICATE

    Certified that this project report "IMPLEMENTATION OF FPGA-BASED OBJECT TRACKING ALGORITHM" is the bonafide work of "KAUSHIK SUBRAMANIAN (21904106043) AND G. SHRIKANTH (21904106079)" who carried out the project work under my supervision.

    SIGNATURE: Prof. R. Narayanan, Head of the Department, Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Pennalur, Sriperumbudur - 602105
    SIGNATURE: Mr. N. Venkateswaran, Supervisor, Assistant Professor, Department of Electronics and Communication Engineering, Sri Venkateswara College of Engineering, Pennalur, Sriperumbudur - 602105
    EXTERNAL EXAMINER / INTERNAL EXAMINER

    ACKNOWLEDGEMENT

    We are personally indebted to a number of people who gave us their useful insights to aid our overall progress on this project; a complete acknowledgement would therefore be encyclopedic. First of all, we would like to give our deepest gratitude to our parents for permitting us to take up this course. Our sincere thanks and heartfelt sense of gratitude goes to our respected Principal, Dr. R. Ramachandran, for all his efforts and administration in educating us in his premier institution. We take this opportunity to also thank our Head of the Department, Prof. R. Narayanan, for his encouragement throughout the project. We would like to express our gratitude to our Internal Coordinator, Prof. Ganesh Vaidyanathan, for his commendable support and encouragement for the completion of our project with perfection.
  • Hardware Acceleration
    CprE 488 – Embedded Systems Design
    Lecture 8 – Hardware Acceleration
    Joseph Zambreno, Electrical and Computer Engineering, Iowa State University
    www.ece.iastate.edu/~zambreno  rcl.ece.iastate.edu
    "First, solve the problem. Then, write the code." – John Johnson

    Motivation: Moore's Law
    • Every two years:
      – Double the number of transistors
      – Build higher performance general-purpose processors
    • Make the transistors available to the masses
    • Increase performance (1.8×↑)
    • Lower the cost of computing (1.8×↓)
    • Sounds great, what's the catch?
    [Photo: Gordon Moore]

    Motivation: Moore's Law (cont.)
    • The "catch" – powering the transistors without melting the chip!
    [Chart: chip transistor count versus year, 1970–2015, rising from about 2,300 to 2.2 billion, with chip power growing from roughly 0.5 W to 130 W]

    Motivation: Dennard Scaling
    • As transistors get smaller their power density stays constant
    [Photo: Robert Dennard]
    Transistor: 2D voltage-controlled switch
      Dimensions, Voltage, Doping Concentrations: ×0.7
      Area: 0.5×↓
      Capacitance: 0.7×↓
      Frequency: 1.4×↑
      Power: 0.5×↓
    Power = Capacitance × Frequency × Voltage²

    Motivation: Dennard Scaling (cont.)
    • In the mid-2000s, Dennard scaling "broke"
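    The arithmetic behind the last two slides can be written out. Using the slide's formula for dynamic power and the ×0.7 scaling of dimensions and voltage, a worked version (standard Dennard-scaling arithmetic, not additional material from the lecture) is:

        P = C f V^2
        C' = 0.7\,C, \quad f' = 1.4\,f, \quad V' = 0.7\,V
        \Rightarrow\; P' = (0.7\,C)(1.4\,f)(0.7\,V)^2 \approx 0.5\,P
        A' = (0.7)^2 A \approx 0.5\,A \;\Rightarrow\; P'/A' \approx P/A \quad \text{(constant power density)}

    Once the supply voltage can no longer be reduced (V' = V), the same step gives P' = (0.7 C)(1.4 f)V^2 ≈ P while the area still halves, so power density roughly doubles each generation — the "catch" that ended classical Dennard scaling in the mid-2000s.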