GPTPU: Accelerating Applications using Edge Tensor Processing Units

Kuan-Chieh Hsu and Hung-Wei Tseng
University of California, Riverside
{khsu037, htseng}@ucr.edu

This paper is a pre-print (arXiv:2107.05473v2 [cs.DC], 13 Jul 2021) of a paper in SC '21, the 2021 International Conference for High Performance Computing, Networking, Storage and Analysis. Please refer to the conference proceedings for the most complete version.

ABSTRACT

Neural network (NN) accelerators have been integrated into a wide spectrum of computer systems to accommodate the rapidly growing demands for artificial intelligence (AI) and machine learning (ML) applications. NN accelerators share the idea of providing native hardware support for operations on multidimensional tensor data. Therefore, NN accelerators are theoretically tensor processors that can improve system performance for any problem that uses tensors as inputs/outputs. Unfortunately, commercially available NN accelerators only expose their computation capabilities through AI/ML-specific interfaces. Furthermore, NN accelerators reveal very few hardware design details, so applications cannot easily leverage the tensor operations NN accelerators provide.

This paper introduces General-Purpose Computing on Edge Tensor Processing Units (GPETPU), an open-source, open-architecture framework that allows the developer and research communities to discover the opportunities that NN accelerators enable for applications. GPETPU includes a powerful programming interface with efficient runtime and system-level support—similar to that of CUDA/OpenCL in GPGPU computing—to bridge the gap between application demands and mismatched hardware/software interfaces.

We built our GPETPU machine using Edge Tensor Processing Units (Edge TPUs), which are widely available and representative of many commercial NN accelerators. We identified several novel use cases and revisited the algorithms. By leveraging the underlying Edge TPUs to perform tensor-algorithm-based compute kernels, our results reveal that GPETPU can achieve a 2.46× speedup over high-end CPUs and reduce energy consumption by 40%.

1 INTRODUCTION

The demand for artificial intelligence (AI) and machine learning (ML) applications has exploded in recent years, and the increase in AI/ML workloads has led to significant research advances in neural network (NN) accelerators, including Google's Edge Tensor Processing Units (Edge TPUs) [1] and Apple's Neural Engines [2], which are already present as auxiliary hardware components in commodity systems. These NN accelerators' power/energy efficiency is orders of magnitude better than that of conventional vector processors (e.g., graphics processing units [GPUs]) for the same workloads. Despite the differences among microarchitectures, most NN accelerators are essentially matrix processors that take tensors/matrices as inputs, generate tensors/matrices as outputs, and provide operators that facilitate NN computations.

Two decades ago, GPUs were just domain-specific accelerators used for shading and rendering. But intensive research into high-performance algorithms, architectures, systems, and compilers [3–12], together with the availability of frameworks like CUDA [13] and OpenCL [14], has revolutionized GPUs and transformed them into high-performance, general-purpose vector processors. We expect a similar revolution to take place with NN accelerators—a revolution that will create general-purpose matrix processors for a broader spectrum of applications. However, democratizing these NN accelerators for non-AI/ML workloads will require the system framework and the programmer to tackle the following issues:

(1) The microarchitectures and instructions of NN accelerators are optimized for NN workloads instead of general matrix/tensor algebra. These auxiliary NN accelerators focus on latency per inference, not yet on delivering computation throughput comparable to GPUs. Naively mapping conventional matrix/tensor algorithms to AI/ML operations will lead to sub-optimal performance.

(2) Because many AI/ML applications are error tolerant, NN accelerators typically trade accuracy for area/energy efficiency; when such a trade-off produces undesirable results, additional mechanisms are needed to make adjustments.

(3) The programming interfaces of existing NN accelerators are specialized for developing AI/ML applications. Existing frameworks expose very few details about the hardware/software interfaces of NN accelerators, so programmers are unable to customize computation, and applications can suffer significant performance overhead from adjusting the parameters/data bound to the supported ML models.

(4) Tensor algorithms are traditionally time-consuming, so programmers have tailored compute kernels in favor of scalar/vector processing. Such tailoring leaves applications unable to take advantage of tensor operators without revisiting their algorithms.

This paper bridges the gap between general-purpose programmability and domain-specific NN accelerators by presenting a full-stack system architecture that enables General-Purpose Computing on Edge TPUs (GPETPU). GPETPU tackles all the aforementioned challenges by providing a programming interface, a runtime system, a compiler, and libraries. With the system this paper proposes, programmers will be able to explore the enormous potential of the matrix processing model inherent in the Edge TPU, a commercially available accelerator that can be part of a system-on-module (SOM) or be easily attached to various forms of computer systems. A commercialized Edge TPU can perform ML-model inference at 4 TOPS (tera operations per second) with only 2 W of power consumption. The design that GPETPU demonstrates can also work for NN accelerators sharing similar architectures.

GPETPU provides a programming framework, including an Edge TPU-specific C/C++ extension, OpenCtpu, and a runtime system. GPETPU frees programmers from directly interacting with the accelerator's hardware so they can focus on the design of tensor-based algorithms for their applications. OpenCtpu achieves more programming flexibility than existing domain-specific interfaces by exposing high-level tensor/matrix algebra operators (e.g., matrix multiply) and low-level accelerator operators (e.g., convolution and multiply-accumulate) to the programmer, so programmers can design arbitrary tensor algorithms or customize operators that cannot easily be implemented using domain-specific languages.
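This excerpt does not reproduce OpenCtpu's actual API, so the following is only a minimal sketch of what a host program built around such high- and low-level operators might look like. The Tensor type and the openctpu_mm function are hypothetical stand-ins for the runtime-managed buffers and the high-level matrix-multiply operator the paragraph describes, with a plain CPU loop in place of the Edge TPU offload:

```c++
// A minimal, hypothetical sketch of an OpenCtpu-style host program. The
// names and signatures here are illustrative only; the CPU loop stands in
// for the point at which the real runtime would dispatch to an Edge TPU.
#include <cstdio>
#include <vector>

struct Tensor {                 // hypothetical runtime-managed 2-D tensor
    int rows, cols;
    std::vector<float> data;
};

// Hypothetical high-level operator: C = A x B. In a real GPETPU build the
// runtime would quantize A and B into an inference task for the Edge TPU.
static void openctpu_mm(const Tensor& a, const Tensor& b, Tensor& c) {
    for (int i = 0; i < a.rows; ++i)
        for (int j = 0; j < b.cols; ++j) {
            float acc = 0.0f;   // multiply-accumulate, the low-level operator
            for (int k = 0; k < a.cols; ++k)
                acc += a.data[i * a.cols + k] * b.data[k * b.cols + j];
            c.data[i * b.cols + j] = acc;
        }
}

int main() {
    const int n = 4;
    Tensor a{n, n, std::vector<float>(n * n, 1.0f)};
    Tensor b{n, n, std::vector<float>(n * n, 2.0f)};
    Tensor c{n, n, std::vector<float>(n * n, 0.0f)};
    openctpu_mm(a, b, c);                       // the offload point in GPETPU
    std::printf("c[0][0] = %.1f\n", c.data[0]); // 1.0 * 2.0 * 4 = 8.0
}
```

The point of the interface shape, as the paragraph describes it, is that the programmer writes ordinary tensor algebra while the runtime decides how to bind the data to a model the accelerator can execute.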
The core of the GPETPU runtime system is our proposed Tensorizer, a module that dynamically evaluates input data and transforms the data into ML models on which the underlying Edge TPUs or other NN accelerators can efficiently perform inference operations. Tensorizer handles the quantization and calibration of input datasets and computation results, thereby minimizing the impact of the limited precision of NN accelerators. The GPETPU runtime also schedules computation tasks and distributes prepared data to available NN accelerators in a manner that minimizes data-movement overhead.
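Tensorizer's internals are not shown in this excerpt; the sketch below illustrates the standard 8-bit affine quantization-and-calibration scheme used by TensorFlow Lite models for the Edge TPU, which matches the paragraph's description of mapping full-precision data onto a limited-precision accelerator. All identifiers are illustrative, not the module's real API:

```c++
// A sketch of the kind of calibration/quantization step Tensorizer performs:
// scan the data range, derive an affine mapping (scale + zero point) onto
// [0, 255], and quantize. The round trip exposes the precision loss that
// the runtime must calibrate away.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct QuantParams { float scale; std::uint8_t zero_point; };

// Calibration: find the value range (always including 0) and map it to uint8.
static QuantParams calibrate(const std::vector<float>& v) {
    auto [lo, hi] = std::minmax_element(v.begin(), v.end());
    float min_v = std::min(*lo, 0.0f), max_v = std::max(*hi, 0.0f);
    float scale = (max_v - min_v) / 255.0f;
    if (scale == 0.0f) scale = 1.0f;                    // all-constant input
    auto zp = static_cast<std::uint8_t>(-min_v / scale + 0.5f);
    return {scale, zp};
}

static std::uint8_t quantize(float x, QuantParams q) {
    float t = x / q.scale + q.zero_point;
    return static_cast<std::uint8_t>(std::clamp(t, 0.0f, 255.0f) + 0.5f);
}

static float dequantize(std::uint8_t x, QuantParams q) {
    return q.scale * (static_cast<int>(x) - q.zero_point);
}

int main() {
    std::vector<float> data = {-1.5f, -0.2f, 0.0f, 0.7f, 3.1f};
    QuantParams q = calibrate(data);
    for (float x : data) {
        std::uint8_t u = quantize(x, q);
        std::printf("%+.3f -> %3u -> %+.3f\n", x, u, dequantize(u, q));
    }
}
```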
Despite the Edge TPU's promising energy efficiency and recently open-sourced C++ API, the documentation is vague regarding the Edge TPU's hardware/software interface and architecture. This lack of detail complicates the design of systems that fully exploit the Edge TPU's capabilities. To develop GPETPU, we measured the performance of the available Edge TPU operators, reverse-engineered the Edge TPU hardware/software interface for data exchanges, and analyzed the Edge TPU architecture. We applied our understanding of the Edge TPU to optimize the backend runtime system for efficient task creation and data transformation. We then built a prototype GPETPU system with 8 Edge TPUs to allow concurrent GPETPU task execution. We demonstrate [...] the GPETPU platform (to be released during the conference). (6) The paper shows the performance and energy-consumption benefits of the prototype GPETPU system.

2 BACKGROUND

This section briefly highlights TPU architectures and introduces alternative NN accelerators where GPETPU can potentially work.

2.1 TPU Architecture

As most NN applications take matrix/tensor inputs and iteratively update parameters/weights from previous outcomes, the TPU microarchitecture accelerates NN tasks for modern ML applications by creating a systolic array that performs operations on units of matrices/tensors. For inference tasks, the TPU treats one of the input matrices as the trained model and the other as the samples to predict or classify. Taking matrices/tensors as the default inputs and outputs makes the TPU architecture and its corresponding execution model fundamentally different from conventional CPU/GPU architectures, which compute on scalar/vector data pairs. TPUs also incorporate large on-chip memory to hold the intermediate results that later iterations reuse. Unlike conventional processors, TPUs do not contain on-chip instruction caches; they simply use a CISC-style instruction-set architecture and rely on the host program to issue instructions through the system interconnect. And whereas conventional processors aim for precise computation results, TPU matrix units only support operations at a limited level of precision, which is sufficient to satisfy the demands of modern ML applications while significantly reducing both TPU cost and energy requirements.
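The systolic data flow described above can be made concrete with a toy, cycle-level model: operand wavefronts enter the array skewed by one cycle per row and column, and each processing element performs one low-precision multiply-accumulate per cycle into a wider accumulator. The array size, operand widths, and output-stationary dataflow below are assumptions for illustration; the Edge TPU's actual configuration is undocumented:

```c++
// A toy, cycle-level model of an output-stationary systolic array computing
// C = A x B on 8-bit operands with 32-bit accumulators. It only illustrates
// the idea that operands march through the grid and each processing element
// (PE) performs one multiply-accumulate per cycle.
#include <cstdint>
#include <cstdio>

int main() {
    constexpr int N = 3;                // assumed array size, for illustration
    std::int8_t a[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    std::int8_t b[N][N] = {{9, 8, 7}, {6, 5, 4}, {3, 2, 1}};
    std::int32_t acc[N][N] = {};        // one accumulator per PE

    // Operands are skewed: row i of A and column j of B are delayed by i and
    // j cycles, so the pair (a[i][k], b[k][j]) reaches PE(i,j) at cycle i+j+k.
    for (int cycle = 0; cycle <= 3 * (N - 1); ++cycle)
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j) {
                int k = cycle - i - j;  // index of the operand pair arriving now
                if (k >= 0 && k < N)    // PE idles until its wavefront arrives
                    acc[i][j] += std::int32_t(a[i][k]) * std::int32_t(b[k][j]);
            }

    for (int i = 0; i < N; ++i)         // prints the ordinary matrix product
        std::printf("%d %d %d\n", acc[i][0], acc[i][1], acc[i][2]);
}
```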

2.2 Edge TPU