Evaluation of Open-Source Linear Algebra Libraries Targeting ARM and RISC-V Architectures
Proceedings of the Federated Conference on Computer Science and Information Systems, pp. 663–672. DOI: 10.15439/2020F145, ISSN 2300-5963, ACSIS, Vol. 21.

Christian Fibich, Stefan Tauner, Peter Rössler, Martin Horauer
Dept. of Electronic Engineering, University of Applied Sciences Technikum Wien
Höchstädtpl. 6, 1200 Vienna, Austria
{fibich, tauner, roessler, horauer}@technikum-wien.at

Abstract—Basic Linear Algebra Subprograms (BLAS) has emerged as a de-facto standard interface for libraries providing linear algebra functionality. The advent of powerful devices for Internet of Things (IoT) nodes enables the reuse of existing BLAS implementations in these systems. This calls for a discerning evaluation of the properties of these libraries on embedded processors. This work benchmarks and discusses the performance and memory consumption of a wide range of unmodified open-source BLAS libraries. In comparison to related (but partly outdated) publications, this evaluation covers the largest set of open-source BLAS libraries, considers memory consumption as well, and distinctively focuses on Linux-capable embedded platforms (an ARM-based SoC that contains a SIMD accelerator and one of the first commercial embedded systems based on the emerging RISC-V architecture). Results show that especially for matrix operations and larger problem sizes, optimized BLAS implementations allow for significant performance gains when compared to pure C implementations. Furthermore, the ARM platform outperforms the RISC-V incarnation in our selection of tests.

Index Terms—Embedded Systems, Basic Linear Algebra Subprograms, BLAS, Benchmarks, ARM, RISC-V

This work has been supported by the Austrian Federal Ministry for Digital and Economic Affairs (BM:DW) and the National Foundation for Research, Technology and Development as related to the Josef Ressel Center "Innovative Platforms for Electronic-Based Systems" (INES) managed by the Christian Doppler Society (CDG). We thank Kapsch TrafficCom AG for their support of this research.

I. INTRODUCTION

Edge computing – processing sensor data as close to their origin as possible – is a paradigm to de-centralize processing and storage in order to improve the scalability of IoT applications. More potent embedded CPUs allow more complex processing tasks, for example signal and image processing operations as well as inference (and even training) of neural networks. Vector or matrix operations are essential building blocks of the digital algorithms that are at the core of these applications. This raises the question whether it is viable to re-use and adapt proven mathematical software libraries written for server and desktop computers for embedded target platforms.

Netlib, a repository for open-source mathematical software run by AT&T, Bell Labs, the University of Tennessee, and Oak Ridge National Laboratory, both maintains the BLAS specification document [1] and provides a reference implementation of this specification^1. BLAS essentially describes a programming interface for three levels of algorithms: level 1 contains vector-vector operations, level 2 defines matrix-vector operations, and level 3 specifies matrix-matrix operations, all for single- and double-precision real numbers as well as for single- and double-precision complex numbers. The reference implementation itself does not contain optimized code – for example, matrix multiplication is implemented using a simple iterative algorithm with three nested loops – it is, however, widely used as a performance baseline by implementers of optimized BLAS libraries. For example, implementations that take advantage of Single Instruction Multiple Data (SIMD) hardware extensions provided by modern CPUs, as well as of GPUs via interfaces such as CUDA and OpenCL, are common.

^1 https://www.netlib.org/blas/, last visited on 2020-06-27
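To make the three-level interface and the reference baseline concrete, the sketch below contrasts a reference-style triple loop for double-precision general matrix multiplication with the corresponding level-3 call through the standard CBLAS binding, cblas_dgemm. This is a minimal illustration, not code from the paper's benchmark suite: the function name naive_dgemm, the row-major storage convention, and the 2x2 operands are our own choices, and any conforming library (reference BLAS, OpenBLAS, ATLAS, BLIS, ...) can serve the cblas_dgemm call when linked in, e.g. via -lblas or -lopenblas.

```c
#include <stdio.h>
#include <cblas.h>

/* Reference-style GEMM baseline: C = alpha*A*B + beta*C, computed with
 * the simple three-nested-loop algorithm that the Netlib reference
 * implementation uses as a performance baseline (row-major here). */
static void naive_dgemm(int m, int n, int k, double alpha,
                        const double *A, const double *B,
                        double beta, double *C)
{
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            double acc = 0.0;
            for (int p = 0; p < k; p++)
                acc += A[i * k + p] * B[p * n + j];
            C[i * n + j] = alpha * acc + beta * C[i * n + j];
        }
    }
}

int main(void)
{
    enum { M = 2, N = 2, K = 2 };
    double A[M * K] = { 1, 2, 3, 4 };
    double B[K * N] = { 5, 6, 7, 8 };
    double C0[M * N] = { 0 }, C1[M * N] = { 0 };

    naive_dgemm(M, N, K, 1.0, A, B, 0.0, C0);

    /* The same operation through the standard level-3 BLAS interface. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0, A, K, B, N, 0.0, C1, N);

    printf("naive: %g %g %g %g\n", C0[0], C0[1], C0[2], C0[3]);
    printf("cblas: %g %g %g %g\n", C1[0], C1[1], C1[2], C1[3]);
    return 0;
}
```

Optimized libraries replace the inner loops of such a baseline with blocked, vectorized kernels, which is exactly where the SIMD extensions mentioned above come into play.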
Since many modern embedded applications rely on algorithms that mandate complex computations, an evaluation of whether some BLAS implementations can be re-used under the resource constraints imposed by typical embedded platforms seems in demand. Unlike the usual target systems for BLAS libraries, this work evaluates them on comparably small embedded systems that display significantly divergent characteristics due to architectural differences. For example, modern x86 CPUs can retire over 10 instructions per cycle per core under ideal circumstances while running at a multi-GHz clock rate with dozens of MB of caches. This is in stark contrast to embedded CPUs, which are often still non-speculative in-order architectures with sub-GHz frequencies. In particular, in this paper we evaluate different implementations of the BLAS specification targeting the ARM Cortex-A9 and RISC-V RV64GCSU architectures, respectively. Both share some similarities, such as being RISC architectures with comparably low clock frequencies and other attributes often found in embedded systems. Both Instruction Set Architectures (ISAs) support extensions, leading to a wide variety of implementations. We chose two representative examples for our tests that are described in more detail in Section IV.

Furthermore, it was the intention of the authors to present a more comprehensive overview and evaluation of existing open-source BLAS libraries when compared to related work, which is discussed in Section II. In Section III, the scope of this work – the evaluated libraries as well as the benchmark applications and CPU architectures that constitute the basis of these evaluations – is described. Section IV provides details on the specific build and run-time environments as well as the employed metrics and measurement methodologies. Finally, the results of the evaluations are discussed in Section V before the paper concludes.

II. RELATED WORK

Due to its widespread use, libraries implementing the BLAS specification have been compared in the past. For example, [2] presents a wrapper for multiple BLAS library implementations facilitating their interchangeable use, along with an evaluation by way of several benchmarks.
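The interchangeability that such wrappers build on stems from the shared entry points: every conforming BLAS library exports the same Fortran-derived symbols, so an application can switch implementations at link time (or via LD_PRELOAD) without source changes. The sketch below is our own illustration of this mechanism rather than code from [2]; it declares and calls the Fortran-ABI symbol dgemm_ directly, assuming the common lowercase-with-underscore name mangling used on Linux platforms.

```c
#include <stdio.h>

/* Fortran-ABI prototype of double-precision GEMM as exported by
 * conforming BLAS libraries: all arguments are passed by reference
 * and matrices are stored column-major. Linking a different
 * libblas swaps the implementation without any source changes. */
extern void dgemm_(const char *transa, const char *transb,
                   const int *m, const int *n, const int *k,
                   const double *alpha, const double *a, const int *lda,
                   const double *b, const int *ldb,
                   const double *beta, double *c, const int *ldc);

int main(void)
{
    const int n = 2;
    const double alpha = 1.0, beta = 0.0;
    /* Column-major 2x2 matrices: A = [1 3; 2 4], B = [5 7; 6 8]. */
    double A[] = { 1, 2, 3, 4 };
    double B[] = { 5, 6, 7, 8 };
    double C[4] = { 0 };

    dgemm_("N", "N", &n, &n, &n, &alpha, A, &n, B, &n, &beta, C, &n);
    printf("C = [%g %g; %g %g]\n", C[0], C[2], C[1], C[3]);
    return 0;
}
```

This drop-in property is also what makes like-for-like comparisons of unmodified BLAS libraries, as performed in this work, practical: the same benchmark source can simply be relinked against each candidate implementation.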
Similar evaluations are provided by the BLIS framework presented in [3], targeting the performance of several BLAS level 2 and 3 algorithms and comparing them with an optimized ISO C implementation. The code generator LGen generates computational kernels for some BLAS operations with fixed-sized operands that are optimized for different target architectures and thus improve over generic implementations that employ varying operand sizes, see [4]. This approach was refined in [5], where the targets were shifted to some high-end embedded processors.

Finally, BLASFEO (see also Section III-B) is a BLAS implementation that is intended to enable executing optimization algorithms on embedded targets. It is especially optimized for widely used embedded application processors (e.g., ARM Cortex-A7, -A9, -A15) as well as modern Intel and AMD architectures (e.g., Intel Haswell, Sandy Bridge). In [6], the authors compare their implementation to multiple other BLAS libraries. BLASFEO provides its own API that is better suited for small input sizes than standard BLAS, as it reduces the overhead of aligning the matrices for internal processing. However, the standard BLAS API has been implemented coequally to the native API and evaluated on Intel and ARM microarchitectures [7].

In addition to the academic literature, several authors of the libraries evaluated in this work have published their own benchmark results. ATLAS provides timing data for the DGEMM (double-precision general matrix multiplication) BLAS function (dimensions between 1000 and 2000) for various library versions and CPU architectures (e.g., Intel, MIPS, SPARC)^2. The results, however, are provided only up to ATLAS version 3.9.5, which is about 8 years old. The Eigen project provides benchmark results of various level-2 and level-3 BLAS functions, as well as of more complex functions such as lower-upper factorization of matrices^3. The results date from 2011 and are compared to GotoBLAS (an OpenBLAS predecessor), Intel MKL, and ATLAS 3.8.3. BLIS provides the most recent results, dating from 2020. Results are provided for server- and workstation-class CPUs (i.e., tens of cores; ARM ThunderX2, Intel Skylake-X, Intel Haswell, AMD EPYC). BLAS level 3 functions are evaluated against Eigen, OpenBLAS, BLASFEO, and MKL. Different sets of results are available for large^4 and small^5 matrices.

Unlike the publications discussed above, this work examines BLAS libraries on comparably small embedded systems. Preliminary results of this evaluation were published in [8]. This work widens the scope of the analysis both by inclusion of the RISC-V architecture and of the very recent BLASFEO linear algebra library. It is, to the best of our knowledge, the first work that evaluates the performance of general-purpose linear algebra libraries on the RISC-V architecture. Furthermore, it extends previous work, in particular targeting ARM Cortex-A9 and RISC-V. The focus on embedded systems ruled out Intel's Math Kernel Library (MKL) implementation, which specifically targets x86 architectures. Instead, the performance of ARM's Ne10 library was included. Although this is not an implementation of the BLAS specification, it has an overlapping scope. The treatment of memory consumption of the libraries and the inclusion of RISC-V sets this work apart from past publications.

TABLE I: An overview of BLAS Benchmarks
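The published results summarized above (and surveyed in Table I) all reduce to the same basic measurement pattern: run a BLAS routine on a given problem size, time it, and convert the elapsed time into floating-point throughput. The sketch below is our own simplified illustration of that pattern, not the harness used in this paper or by any cited benchmark (this paper's methodology is described in Section IV). It times cblas_dgemm in the 1000 to 2000 dimension range mentioned for the ATLAS data and reports GFLOPS using the conventional 2n^3 flop count for an n-by-n DGEMM; the repetition count is arbitrary and warm-up and statistical treatment are deliberately omitted.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <cblas.h>

/* Time `reps` DGEMMs of size n x n and report the achieved GFLOPS,
 * counting the conventional 2*n^3 floating-point operations each. */
static double dgemm_gflops(int n, int reps)
{
    double *A = malloc(sizeof(double) * n * n);
    double *B = malloc(sizeof(double) * n * n);
    double *C = malloc(sizeof(double) * n * n);
    for (int i = 0; i < n * n; i++) {
        A[i] = (double)rand() / RAND_MAX;
        B[i] = (double)rand() / RAND_MAX;
        C[i] = 0.0;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < reps; r++)
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    n, n, n, 1.0, A, n, B, n, 0.0, C, n);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gflops = 2.0 * n * n * n * reps / secs / 1e9;

    free(A); free(B); free(C);
    return gflops;
}

int main(void)
{
    /* Problem sizes in the range used by the ATLAS timings cited above. */
    for (int n = 1000; n <= 2000; n += 500)
        printf("n = %4d: %.2f GFLOPS\n", n, dgemm_gflops(n, 3));
    return 0;
}
```

Relinking this harness against each library under test, on the target hardware itself, yields directly comparable throughput figures; memory consumption, the other metric this work considers, has to be measured separately.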