Enabling Development of OpenCL Applications on FPGA Platforms

Kavya S Shagrithaya

Thesis submitted to the Faculty of the Virginia Polytechnic Institute and State University in partial fulfillment of the requirements for the degree of Master of Science in Computer Engineering.

Committee: Peter M. Athanas (Chair), Patrick R. Schaumont, Paul E. Plassmann

August 6, 2012, Blacksburg, Virginia

Keywords: OpenCL, AutoESL, FPGA, Convey, HPC

Copyright 2012, Kavya S Shagrithaya

Abstract

FPGAs can potentially deliver tremendous acceleration in high-performance server and embedded computing applications. Whether used to augment a processor or as stand-alone devices, these reconfigurable architectures are being deployed in a large number of implementations owing to the massive amounts of parallelism they offer. At the same time, a significant challenge to their widespread acceptance is the laborious effort required to program these devices. The increased development time, the level of experience needed by developers, fewer turns per day, and the difficulty of iterating quickly over designs affect the time-to-market for many solutions. High-level synthesis aims to increase the productivity of FPGAs and bring them within the reach of software developers and domain experts. OpenCL is a specification introduced for parallel programming across platforms. Applications written in OpenCL consist of two parts: a host program for initialization and management, and kernels that define the compute-intensive tasks. In this thesis, a compilation flow that generates customized, application-specific hardware descriptions from OpenCL computation kernels is presented. The flow uses the Xilinx AutoESL tool to obtain the design specification for the compute cores. An architecture is provided that integrates the cores with memory and host interfaces.
The host program in the application is compiled and executed to demonstrate a proof-of-concept implementation towards achieving an end-to-end flow that provides abstraction of hardware at the front end.

Acknowledgements

Firstly, I am sincerely thankful to my advisor, Dr. Peter Athanas, for providing me an opportunity to be a part of the Configurable Computing Lab. He has been a source of inspiration throughout and has been very encouraging at all times. I would like to express my deep gratitude for his continual support and invaluable guidance, without which I could not have progressed in my research. I have learnt a lot from him, in academics and beyond.

I would like to thank Dr. Patrick Schaumont and Dr. Paul Plassmann for serving as members of my committee. The courses that I took under Dr. Schaumont were a great learning experience, and I immensely enjoyed his style of teaching.

My parents and my brother have been there for me, through thick and thin. I am very grateful to them for encouraging me towards pursuing graduate studies and supporting me throughout. Special thanks to my father, my role model, for all his words of wisdom.

The environment in the lab has always been conducive to learning things the enjoyable way. I would like to thank my lab mates: Tony, for his advice on all work-related matters; Karl, for helping me ramp up in the initial days; Krzysztof, for his help and guidance on this project; Kaiwen, for his inputs towards this work; and the rest of the CCM family, David, Ritesh, Ramakrishna, Keyrun, Xin Xin, Wen Wei, Andrew, Richard, Shaver, Jason, Ryan, Teresa, Abhay and Jacob, for making this a truly enriching experience.

My heartfelt thanks to Kaushik and Abhinav for their support in curricular and non-curricular aspects; Iccha, Vishwas, Vikas, Shivkumar, Navish, Atashi and Mahesh for the fun times off work; and all my other friends at Blacksburg who have made these two years very memorable for me.

I am thankful to God for everything.
Contents

1 Introduction
  1.1 Motivation
  1.2 Contributions
  1.3 Thesis Organization
2 Background
  2.1 High-Level Synthesis
  2.2 Overview of OpenCL
    2.2.1 OpenCL Models
      Platform Model
      Execution Model
      Memory Model
      Programming Model
    2.2.2 OpenCL Framework
    2.2.3 OpenCL C Programming Language
  2.3 Related Work
  2.4 Summary
3 Approach and Implementation
  3.1 Approach
    3.1.1 An Introduction to AutoESL
  3.2 Implementation Specifics
    3.2.1 OpenCL to AutoESL Source-to-Source Translation
    3.2.2 AutoESL Synthesis
    3.2.3 Integration and Mapping on Convey
    3.2.4 Host Library
  3.3 Summary
4 Results
  4.1 Case Study: Vector Addition
    4.1.1 Performance and Resource Utilization
  4.2 Case Study: Matrix Multiplication
  4.3 Comparison of Methodologies
  4.4 Challenges in Using OpenCL
  4.5 Summary
5 Conclusion
  5.1 Future Work
Bibliography
Appendix A: Dot File Description for Vector Addition
Appendix B: OpenCL API Implementations for Kernel Execution

List of Figures

2.1 Few high-level synthesis tools and their source inputs
2.2 The platform model for OpenCL architecture [12]
2.3 An example of an NDRange index space where N = 2
2.4 OpenCL memory regions and their relation to the execution model
3.1 Overview of the design flow
3.2 Dot format visualization for the vector addition kernel
3.3 System architecture on Convey
4.1 Execution flow for the vector addition application
4.2 Performance results for different vector sizes
4.3 Resource utilization for the vector addition application for a single AE
4.4 Matrix multiplication operation

List of Tables

4.1 Performance results for the vector addition application for vectors of size 1024
4.2 Area estimates for a single core from the AutoESL report
4.3 Comparison with SOpenCL and OpenRCL implementations
4.4 Comparison with Altera's OpenCL for FPGAs program

Chapter 1

Introduction

An increasing number of application areas today are embracing high-performance computing (HPC) solutions for their processing needs. With frequency scaling having run its course, conventional processors have given way to various accelerator technologies to meet computational demands. These range from Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs) to heterogeneous multicore processors like the Cell and hybrid architectures like the Convey HC-1. Different classes of applications employ different targets best suited to the problem at hand. Each of the accelerator technologies possesses advantages over the others for different tasks, leading to possibilities of a heterogeneous mix of architectures [1-3]. Reconfigurable systems like FPGAs provide promising opportunities for acceleration in many fields due to their inherent flexibility and massive parallel computation capabilities. The availability of a multitude of hardware devices requires comprehensive approaches to a solution, which include knowledge of the underlying architecture along with methods of designing the algorithm. This translates to increased implementation effort, high learning curves and architecture-aware programming at the developer's end.
In 2004, DARPA launched a project named High Productivity Computing Systems (HPCS) that aimed at providing economically viable high-productivity systems [4]. They proposed using "time to solution" as a metric that includes the time to develop a solution as well as the time taken to execute it. CPUs and GPUs are programmed in high-level languages like C/C++ and CUDA. This level of abstraction enables faster deployment of solutions. In the case of FPGAs, implementations require tedious hardware design and debug, which greatly impact the development time.

1.1 Motivation

Enormous acceleration can be delivered by FPGAs owing to the flexibility and parallelism provided by their fine-grained architecture. However, fully harnessing this potential presents challenges in terms of programmability. Implementation of applications on FPGAs involves cumbersome RTL programming and manual optimizations. Besides domain expertise and software design skills, developers are required to understand intricate details of hardware design, including timing closure, state machine control and cycle-accurate implementations. As a result, the effort of drastically reducing the execution time translates to a significant increase in the time to develop the solution. Jeff Chase et al. implemented a tensor-based real-time optical flow algorithm on FPGA and GPU platforms. A fixed-point version was mapped on a Virtex II Pro XC2VP30 FPGA in the Xilinx XUP V2P board, and a single-precision floating-point implementation was mapped on an NVIDIA GeForce 8800 GTX GPU, with appropriate design optimizations for each platform. Comparisons were drawn on both implementations on various parameters including performance, power, and productivity. The work presented comparable
