Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing


Hardware Acceleration of Fully Quantized BERT for Efficient Natural Language Processing

Zejian Liu(1,2), Gang Li(1), Jian Cheng(1,2)
(1) National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
(2) School of Future Technology, University of Chinese Academy of Sciences
Beijing, China
[email protected], fgang.li, [email protected]

arXiv:2103.02800v1 [cs.AR] 4 Mar 2021

Abstract—BERT is the most recent Transformer-based model that achieves state-of-the-art performance in various NLP tasks. In this paper, we investigate the hardware acceleration of BERT on FPGA for edge computing. To tackle the issue of huge computational complexity and memory footprint, we propose to fully quantize the BERT (FQ-BERT), including weights, activations, softmax, layer normalization, and all the intermediate results. Experiments demonstrate that the FQ-BERT can achieve 7.94× compression for weights with negligible performance loss. We then propose an accelerator tailored for the FQ-BERT and evaluate it on Xilinx ZCU102 and ZCU111 FPGAs. It can achieve a performance-per-watt of 3.18 fps/W, which is 28.91× and 12.72× over an Intel(R) Core(TM) i7-8700 CPU and an NVIDIA K80 GPU, respectively.

I. INTRODUCTION

In recent years, transformer-based models [1] have been ubiquitous in various natural language processing (NLP) applications, including machine translation [2], question answering and document analysis [3]. Compared with the traditional recurrent neural network (RNN) and long short-term memory (LSTM) [4], transformer-based models use a multi-head self-attention mechanism to encode and decode the information of an input sentence to achieve better representative ability, and the Bidirectional Encoder Representations from Transformers (BERT) [5] is the state-of-the-art transformer-based model.

Fig. 1. Left: the overall BERT architecture; middle: the architecture of the multi-head self-attention module; right: the architecture of the scaled dot-product attention layer.

The architecture of BERT is depicted in Figure 1. Different from the original transformer, BERT only consists of stacked encoder layers as it focuses on learning language representations, which is similar to convolutional neural networks for computer vision. The building block of self-attention is essentially matrix-matrix multiplication, which is suitable for massive parallel computing; therefore, high-performance GPUs are preferred to accelerate training and inference in the cloud. However, the vast computational complexity (>20 GFLOPs) and memory requirement (>320 MB of floating-point parameters) of BERT pose a significant challenge for resource-constrained platforms, which hinders its application on edge devices.

In this paper, we investigate the acceleration of BERT on resource-constrained FPGAs through algorithm/hardware co-design. Specifically, to reduce the memory footprint, we adopt quantization to compress the original model. Different from Q8BERT [6] and Q-BERT [7], which only quantize part of the parameters, we propose a fully quantized BERT (FQ-BERT). FQ-BERT represents weights, activations, scale factors, softmax, layer normalization, and all the other intermediate results with integer or fixed-point format. We then present an accelerator tailored for the proposed FQ-BERT and evaluate it on two separate FPGAs. Experiments demonstrate that the proposed accelerator outperforms CPU and GPU by a large margin in terms of performance-per-watt, with less than 1% accuracy degradation on the SST-2 dataset. The main contributions of this paper can be summarized as follows:

• We propose an efficient FPGA-based BERT accelerator via algorithm/hardware co-design. On the algorithm side, we propose a hardware-friendly FQ-BERT that represents all parameters and intermediate results with integer or fixed-point data types. On the hardware side, to support different operations during inference, we present an accelerator with a dot-product based PE and a bit-level reconfigurable multiplier for 8/4-bit and 8/8-bit multiplication at run-time.

• We evaluate the 8/4-bit FQ-BERT on the SST-2 and MNLI datasets, achieving a compression ratio of 7.94× with 0.7% and 3.5% accuracy drop, respectively. We then implement the accelerator on Xilinx ZCU102 and ZCU111 MPSoC. Experiments demonstrate that our accelerator can achieve a performance-per-watt of 3.18 fps/W, which is 28.91× and 12.72× over an Intel(R) Core(TM) i7-8700 CPU and an NVIDIA K80 GPU, respectively.

Fig. 2. The overall architecture of the proposed accelerator for fully quantized BERT.

II. FULLY QUANTIZED BERT

In this section, we introduce the methods and tricks that we used for quantizing the parameters (weights, activations, biases and others) of the original BERT to obtain FQ-BERT.

Fig. 3. Impact of different quantization bitwidths of weights (32/8/6/4/2 bits, with and without clipping) on accuracy: (a) SST-2; (b) MNLI.

A. Weights and Activations

To reduce the computational complexity and memory footprint, we adopt quantization to compress the BERT. In this work, we use a symmetric linear quantization strategy for both
weights and activations, because symmetric quantization is more hardware-friendly due to the lack of a zero-point. Specifically, for each element x of an input tensor, the k-bit quantization function is:

    x_c = clamp(x, MIN, MAX)
    s   = scale(x_c, k)
    x_I = round(x_c · s)                                        (1)
    x_q = x_I / s
    clamp(x, a, b) = min(max(x, a), b)

MIN and MAX are clip thresholds, which need to be carefully tuned during training. Their impact on performance is also depicted in Figure 3. Note that for symmetric quantization, MIN = −MAX. s is a scaling factor, which is determined by the values of the weights or activations and the quantization bitwidth k. For weights, the scaling factor is calculated according to:

    s_w = scale(W, k) = (2^(k−1) − 1) / max(|W|)                (2)

We use an exponential moving average (EMA) to collect the statistics of activations to determine the scaling factor during inference:

    s_a = scale(A, k) = (2^(k−1) − 1) / EMA(max(|A|))           (3)

Figure 3 shows the quantization bitwidth of weights and the corresponding accuracy on two document classification datasets. It can be seen that when the bitwidth of weights is lower than 4, the classification accuracy drops dramatically, which indicates that 4-bit quantization for weights is a reasonable trade-off between maintaining accuracy and achieving a considerable compression ratio.

B. Biases and Others

Different from previous work in which only weights and activations are quantized, our goal is to obtain a hardware-friendly BERT that can facilitate deployment on hardware and improve the utilization of computing units. To this end, we further quantize all the biases to 32-bit integers with the help of the weights' and activations' scaling factors:

    bias_q = round(bias · s_bias) / s_bias,   s_bias = s_a · s_w        (4)

As weights, activations and biases are all quantized, the calculation of the output can be rewritten as:

    y_I = round(y · s_y) = (Σ a_I · w_I + b_I) · s_f,   s_f = s_y / (s_a · s_w)        (5)

where s_a, s_w and s_y are quantized to 8-bit values, and s_f is a 32-bit integer. In addition, we also quantize the numerator and output of the softmax module, and the parameters of layer normalization, to 8-bit fixed-point values.

Fig. 5. The dataflow of the proposed accelerator.

Fig. 4. Two types of BIM (M = 4): (a) Type A; (b) Type B.

III. ACCELERATOR

A. System Overview

The overall architecture of our proposed accelerator is shown in Figure 2. It can be divided into two parts: the software program on the CPU and the hardware accelerator on the FPGA. As shown in Figure 1, the whole model consists of embedding layers, encoder layers and task-specific layers. Different from the encoder layers, the computation cost of the other layers is small but their memory cost is large, so these computations are completed on the CPU, and the results are sent to the FPGA.

The accumulator sums this input and the previous data and sends the results to the Psum Buf. When the calculation is completed, the results are sent to the quantization module to get the final value, incorporating the biases and scaling factors. Since the quantization process takes more than one cycle to complete, the Psum Buf is double buffered to ensure that the calculation can be pipelined.

Softmax Core. BERT utilizes the softmax function to convert QK^T to the attention matrix; the expression of softmax is:

    Softmax(x)_i = exp(x_i) / Σ_j exp(x_j)                      (6)

where x is a row vector of the QK^T matrix. As the exponential function exp(x_i) is very expensive in terms of resource consumption, we use a lookup table instead.
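The symmetric quantization scheme of Section II, Eqs. (1)-(5), can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the authors' code: the tensor sizes, the random seed, and the helper names (`scale_factor`, `quantize`) are our own, and the EMA tracking of activation statistics is reduced to a plain max for brevity.

```python
import numpy as np

def scale_factor(t, k):
    # Eqs. (2)/(3): s = (2^(k-1) - 1) / max|t|. For activations the paper
    # tracks max|A| with an exponential moving average; a plain max is
    # used here for brevity.
    return (2 ** (k - 1) - 1) / np.abs(t).max()

def quantize(x, k, s):
    # Eq. (1): clamp to the symmetric range (MIN = -MAX), scale, round.
    q_max = 2 ** (k - 1) - 1
    return np.rint(np.clip(x * s, -q_max, q_max)).astype(np.int32)

rng = np.random.default_rng(0)
a = rng.standard_normal(64)            # activations -> 8-bit integers
w = rng.standard_normal(64)            # weights     -> 4-bit integers
b = 0.3                                # bias        -> 32-bit integer

s_a, s_w = scale_factor(a, 8), scale_factor(w, 4)
a_i, w_i = quantize(a, 8, s_a), quantize(w, 4, s_w)
b_i = int(np.rint(b * (s_a * s_w)))    # Eq. (4): s_bias = s_a * s_w

# Eq. (5): the dot product and the bias addition are pure integer
# arithmetic; only the final requantization touches the scale factors.
acc = int(a_i.astype(np.int64) @ w_i) + b_i
y_hat = acc / (s_a * s_w)              # dequantized approximation of a.w + b
```

The point of Eq. (5) is visible in the last two lines: everything inside the accumulation loop stays integer, so the hardware PEs never see a floating-point value.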
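The lookup-table replacement for exp(·) in the Softmax Core can be sketched as follows. The fixed-point format (4 fractional bits) and the table depth are assumptions for illustration, not values given in the paper; subtracting the row maximum first is a common implementation choice that exploits the shift invariance of softmax, so only exp of non-positive arguments ever needs to be tabulated.

```python
import numpy as np

FRAC = 4                      # assumed fixed-point format: 4 fractional bits
LUT_SIZE = 256                # assumed table depth

# Tabulate exp(-d / 2^FRAC) for non-negative integer offsets d; offsets
# beyond the table are so far from the row maximum that exp is ~0 anyway.
EXP_LUT = np.exp(-np.arange(LUT_SIZE) / float(1 << FRAC))

def softmax_lut(x_fixed):
    """Softmax over one row of fixed-point scores (e.g. a row of Q K^T)."""
    d = np.max(x_fixed) - x_fixed                # offsets d >= 0
    e = EXP_LUT[np.minimum(d, LUT_SIZE - 1)]     # exp(x_i - max) via lookup
    return e / e.sum()

scores = np.array([24, 16, 8, 0])                # fixed-point 1.5, 1.0, 0.5, 0.0
p = softmax_lut(scores)
```

Because the inputs live on a fixed-point grid, the table needs only one entry per representable offset, which is why a small on-chip ROM can stand in for the expensive exponential unit.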