
Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads

Asif Ali Khan ([email protected]), Norman A. Rink ([email protected]), Fazal Hameed ([email protected]), Jeronimo Castrillon ([email protected])
Technische Universität Dresden, Germany

Abstract

Tensor contraction is a fundamental operation in many algorithms, with a plethora of applications ranging from quantum chemistry over fluid dynamics and image processing to machine learning. The performance of tensor computations critically depends on the efficient utilization of on-chip memories. In the context of low-power embedded devices, efficient management of the memory space becomes even more crucial in order to meet energy constraints. This work investigates strategies for performance- and energy-efficient tensor contractions on embedded systems, using racetrack memory (RTM)-based scratch-pad memory (SPM). Compiler optimizations such as loop access ordering and data layout transformations, paired with architectural optimizations such as prefetching and preshifting, are employed to reduce the shifting overhead in RTMs. Experimental results demonstrate that the proposed optimizations improve the SPM performance and energy consumption by 24% and 74%, respectively, compared to an iso-capacity SRAM.

CCS Concepts • Computer systems organization → Embedded systems; Tensor contractions; Energy consumption; • Compilers → Data transformation; Layout transformation; • Racetrack memory → Shifts minimization.

Keywords Compiler optimization, data transformation, tensors, tensor contraction, matrix multiplication, racetrack memory, preshifting, prefetching, embedded systems

ACM Reference Format:
Asif Ali Khan, Norman A. Rink, Fazal Hameed, and Jeronimo Castrillon. 2019. Optimizing Tensor Contractions for Embedded Devices with Racetrack Memory Scratch-Pads. In Proceedings of the 20th ACM SIGPLAN/SIGBED Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES '19), June 23, 2019, Phoenix, AZ, USA. ACM, New York, NY, USA, 14 pages. © 2019 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6724-0/19/06. https://doi.org/10.1145/3316482.3326351

1 Introduction

Tensors are multi-dimensional data structures that generalize matrices. Consequently, tensor contraction generalizes the operation of matrix multiplication. The abstractions offered by tensors and their operations are central to many algorithms in modern application domains such as signal and media processing, computer vision, and machine learning. Recent years have seen a surge in the emergence of new programming languages and frameworks specifically designed for handling tensor-based computations in these application domains [1, 6, 26, 51], also targeting heterogeneous platforms, e.g. [8, 19, 25]. In the age of the Internet of Things, media processing, computer vision, and machine learning are key application domains for embedded devices, which enable ubiquitous computing in environments that call for an extremely low energy footprint and tiny form factors. Examples of such environments are wearables and autonomous vehicles or aircraft, where tensor processing on the device allows for efficient inference in intelligent applications, cf. Figure 1.

Figure 1. Application domains for embedded systems in the Internet of Things (NLP; AI and ML; smart transport; smart health; infotainment systems; aerospace; virtual 3D; control systems; robotics; HCI).

The typical constraints on size, power, and energy consumption in the embedded domain make the design of systems for processing large multi-dimensional tensors especially challenging. Particular pressure is put on the design of the memory subsystem, which must accommodate large tensorial data structures within the given constraints. This pushes traditional approaches and technologies to their limits. For example, as was already observed in the mid-2000s, traditional SRAM-based memory is power hungry and suffers from severe leakage power consumption, which is responsible for up to 33.7% of the total memory energy consumption [20, 21].

A radically new approach to the design of on-chip memories and the memory hierarchy is offered by non-volatile memories (NVM). One particularly promising NVM technology is the spin-orbitronics-based racetrack memory (RTM), which is more reliable and has lower read/write latency than alternative NVM technologies [43, 44]. Moreover, RTM is very energy-efficient and has ultra-high capacity, which makes it particularly interesting for deployment in embedded devices that process large tensors.

In this paper we propose and analyze data layouts and architecture support for optimizing the important tensor contraction operation for RTM-based scratch-pad memory (SPM). Unlike conventional memories, a single memory cell in RTM stores data in a tape-like magnetic nanowire called a track. Each track is equipped with a read/write port, and accessing data on a track requires shifting and aligning it to the port position. If the programmer or compiler does not manage the data layout judiciously, additional shifts become necessary. The data layout we propose in this paper asymptotically halves the number of shifts required for tensor contractions. As our analysis shows, this halving of the number of shifts is in fact necessary to give RTM a competitive edge over SRAM-based SPM.

Specifically, this paper makes the following contributions.
1. For tensors that fit entirely into the SPM, we derive a data layout that reduces the number of shifts necessary for a tensor contraction to the absolute minimum.
2. We discuss how contractions of large tensors are handled by processing tiles of the tensors in SPM. We show how, in the presence of tiling, the number of shifts can also be reduced to the bare minimum by switching the data layout when bringing new tiles into the SPM.
3. Our simulations show that the proposed data layout for tensors in the SPM, paired with suitable architecture support, is required to outperform SRAM in terms of latency. This also reduces the SPM energy consumption by 74%.

We also discuss how languages and compilers can support the generation of efficient code and suitable data layouts for tensor contractions with RTM-based SPM.

The rest of the paper is organised as follows. Section 2 gives a brief overview of the RTM technology, the SPM layout, and the tensor contraction operation. Section 3 discusses how various data layouts impact the overall shifting overhead and presents the best data layout for tensor contraction. Section 4 provides a qualitative and quantitative comparison of both the naive and the proposed data layouts with SRAM. Section 5 discusses the state of the art, and Section 6 concludes the paper.

2 Background

This section briefly explains the working principle and architecture of racetrack memories. In addition, it provides background on the tensor contraction operation, the layout of scratch-pad memories, and their placement in embedded systems.

2.1 Racetrack Memory

Racetrack memories have evolved significantly over the last decade. Since their conception in 2008, RTMs have made fundamental breakthroughs in device physics. In RTM version 4.0, several major impediments have been eliminated and improvements in device speed and resilience have been demonstrated [44].

Unlike in conventional memories, a single cell in RTM is a magnetic nano-wire (track) that can have up to 100 magnetic domains, where each domain represents a bit. Domains in a nano-wire are separated by magnetic domain walls (DWs). The track can be placed vertically (3D) or horizontally (2D) on the surface of a silicon wafer, as shown in Figure 2. While the vertical placement of tracks achieves the storage density of today's magnetic disk drives, it faces several design challenges. In the horizontal configuration, the cell size can be much smaller than the smallest memory cell today. With state-of-the-art materials, the RTM cell size can be 1.5 F², compared to 120–200 F² in SRAM and 4–8 F² in DRAM [37, 52].

Figure 2. RTM horizontal and vertical placement (shift current I_sh, domain walls, access ports).

The access latency of RTMs depends on how quickly DWs inside a wire can be moved when a shift current is applied. In RTM 1.0, the maximum reported DW velocity was 100 m s⁻¹ [43]. With the development of new structures where a magnetic film is grown on top of a heavy metal, the DW velocity increased to up to 300 m s⁻¹ [36]. However, a major drawback of these designs is that the magnetic film is very sensitive to external magnetic fields. These designs also exhibit fringing fields, restricting closer packing of DWs in the nano-wire. RTM 4.0 eliminates these impediments by adding […]

[…] fashion across the w tracks of a DBC, and that the tracks in a DBC can be moved together in lock-step fashion. For this work, we consider w = 32 and n = 64.