
Register Pointer Architecture for Efficient Embedded Processors

JongSoo Park, Sung-Boem Park, James D. Balfour, David Black-Schaffer, Christos Kozyrakis and William J. Dally
Computer Systems Laboratory, Stanford University
{jongsoo, sbpark84, jbalfour, davidbbs, kozyraki, dally}@stanford.edu

Abstract

Conventional register file architectures cannot optimally exploit temporal locality in data references due to their limited capacity and static encoding of register addresses in instructions. In conventional embedded architectures, the register file capacity cannot be increased without resorting to longer instruction words. Similarly, loop unrolling is often required to exploit locality in register file accesses across iterations because naming registers statically is inflexible. Both optimizations lead to significant code size increases, which is undesirable in embedded systems.

In this paper, we introduce the Register Pointer Architecture (RPA), which allows registers to be accessed indirectly through register pointers. Indirection allows a larger register file to be used without increasing the length of instruction words. Additional register file capacity allows many loads and stores, such as those introduced by spill code, to be eliminated, which improves performance and reduces energy consumption. Moreover, indirection affords additional flexibility in naming registers, which reduces the need to apply loop unrolling in order to maximize reuse of register-allocated variables.

1 Introduction

Embedded system designers must optimize three efficiency metrics: performance, energy consumption, and static code size. The processor register file helps improve the first two metrics. By storing frequently accessed data close to the functional units, the register file reduces the time and energy required to access data from caches or main memory. However, conventional register file architectures cannot fully exploit temporal locality because of their limited capacity and lack of support for indirection.

The Register Pointer Architecture (RPA) supports large register files that reduce the time and energy expended accessing data caches, without increasing code size. The main idea of the RPA is to allow instructions to access registers indirectly through register pointers. This provides two key benefits:

• Large Register File Capacity: Indirect register access relaxes the correlation between register file capacity and instruction word length. Hence, large register files can be used without sacrificing code density (a rough encoding illustration follows this list).

• Register Naming Flexibility: By dynamically modifying register pointers, a small set of instructions can flexibly access data allocated in the register file in a way that maximizes data reuse.
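To make the first benefit concrete, the following back-of-the-envelope comparison is our own illustration; the choice of 256 DRF entries and 8 register pointers is assumed for the sake of the arithmetic, not a design point taken from this paper. A three-operand ARM data-processing instruction encodes each register operand in a 4-bit field, since the base ISA exposes 16 general-purpose registers:

    direct naming, 16 registers  : 3 operands x 4 bits = 12 bits of register specifiers
    direct naming, 256 registers : 3 operands x 8 bits = 24 bits, too wide to fit alongside
                                   the condition, opcode, and shifter fields of a 32-bit encoding
    naming one of 8 RPs          : 3 operands x 3 bits =  9 bits, independent of the DRF size

This sketch ignores the extra bits needed to distinguish RF, RP, and dereferenced-RP operands and their update modes, which depend on the number of RPs, their binding, and the access modes rather than on the number of DRF entries.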
We introduce extensions to the ARM instruction set to implement the RPA. In addition to the conventional register file, the modified architecture includes a dereferenceable register file (DRF). Existing arithmetic and load/store instructions can use a register pointer (RP) in any register operand position to access the contents of the DRF. We define efficient update policies for RPs to support common access patterns with minimal runtime overhead. At the microarchitecture level, we describe the interlocks and forwarding paths needed to minimize read-after-write hazards on RPs. We execute a set of embedded applications on a model of the modified processor to demonstrate that the RPA leads to a speedup of up to 2.8× and energy savings of up to 68%.

We compare the RPA to alternative techniques that provide register indexing flexibility or software-controlled storage close to the processor. Loop unrolling can be used to follow arbitrary data access patterns within the register file. We show that the RPA leads to similar flexibility in register file accesses without the code size increases introduced by longer register names and replicated loop bodies. A software-controlled scratchpad memory could likewise be used to capture temporal locality in embedded applications. Nevertheless, a scratchpad memory still requires explicit load and store instructions to make data available to arithmetic instructions.

In summary, the major contributions of this paper are: we introduce the RPA, which supports indirect register file access through register pointers; we explore design options for the RPA at the instruction set and microarchitecture levels, including parameters such as the number of additional registers; and we compare an embedded processor implementing the RPA to a conventional organization. We also compare against techniques such as loop unrolling and scratchpad memory.

The remainder of this paper is organized as follows. Section 2 provides a detailed description of the RPA, including instruction set and microarchitectural considerations. Sections 3 and 4 describe the experimental methodology and results. Sections 5 and 6 present related work and conclusions.

2 Register Pointer Architecture

This section describes the RPA architecture at the instruction set and microarchitecture levels.

2.1 Instruction Set Architecture

The RPA extends a base instruction set, such as ARM, to support indirect register file accesses. An instruction indirectly accesses a register by identifying in its encoding a register pointer, whose contents provide the address for the actual register file reference. While implementations of existing ISAs may use indirect register accesses to implement techniques such as register renaming [1], the RPA exposes the indirection to software.

In addition to the conventional register file (RF), the RPA defines a dereferenceable register file (DRF) and register pointers (RPs). The DRF contains all registers accessed indirectly, while the RPs contain indirection information (a DRF address and other configuration fields).

Data processing and transfer instructions can specify any of the following as an input or output operand: an RF entry, a DRF entry through an RP, or an RP itself. Note that only register operands are modified by the RPA; no additional opcodes or memory addressing modes are added to the instruction set.

The example instruction shown below illustrates important features of the RPA ISA. The instruction adds r0, an RF entry, to the DRF entry pointed to by the RP p0. The sum is stored in the RP p1. The postfix operator "!" increments RP p0 after it is dereferenced.

    add p1, r[p0]!, r0

Assembly instructions access the DRF by specifying an RP enclosed in square brackets after the symbol r. An RP is accessed as though it were a conventional general-purpose register: by directly specifying the RP as an operand.

To reduce the overhead of RP updates, a register pointer may be post-incremented when named in an instruction. For further flexibility, we support circular addressing using two additional fields in each RP: a base address (begin) and a limit address (end). An attempt to increment the address beyond the end address causes the pointer to wrap around to the begin address. An overflow bit is set when the pointer wraps around, allowing software to detect the wraparound if desired. Each RP stores the base and limit address fields in its higher bits. When an instruction dereferences an RP, it accesses only the least significant bits, whose contents are the actual address for DRF accesses.

The number of bits required to encode the RP and DRF addresses depends on the number of RPs, the binding of RPs, and the number of access modes. Note that the number of DRF entries does not influence the encoding of instructions. The specific modifications to the ARM ISA are described in Section 3.
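To show how the DRF, RPs, post-increment, and circular addressing work together, the sketch below applies the syntax of the example above to an inner-product loop whose coefficients reside in the DRF. It is an illustration only: the loop structure, the register assignments, the use of r[p0] as an mla operand, and the assumption that p0 has already been configured with begin and end bracketing the coefficient block are ours, not code from the paper.

    ; p0 is assumed to wrap over the DRF block holding the coefficients;
    ; r1 points to the input samples in memory, r2 holds the sample count,
    ; and r3 accumulates the result
    loop:
        ldr   r0, [r1], #4        ; load the next sample, post-increment the address
        mla   r3, r0, r[p0]!, r3  ; r3 += sample * coefficient; advance p0, wrapping at end
        subs  r2, r2, #1          ; decrement the remaining count
        bne   loop                ; repeat until all samples are consumed

Because the hardware advances and wraps p0, the loop body stays compact: no unrolling or extra pointer-arithmetic instructions are needed to walk the register-allocated coefficients.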
2.2 Design Parameters

The number of DRF entries, the number of DRF ports, the number and binding of RPs, and other pertinent resources introduce interesting tradeoffs amongst performance, software flexibility, energy consumption, and hardware complexity, as described below.

Number of DRF Entries: The relationship between the reduction of cache accesses and the number of DRF entries varies by application. For applications such as 1DFIR, the cache access reduction is a linear function of the DRF size, while for matrix multiplication it follows a square root function (a short blocking argument for this case is sketched at the end of this section). For applications with table lookups, it is a step function: it saturates once the lookup table fits in the DRF.

There are two costs associated with larger DRF sizes. First, the energy consumption of a DRF access increases with the number of DRF entries. Second, the DRF access time increases because the larger row decoders incur longer delays and the discharge time of the longer bit-lines increases. The additional access latency should not adversely impact program execution times unless it requires increasing the clock cycle time. Given a small DRF (fewer than 256 entries), we expect the clock cycle time to be limited by the cache access latency rather than by the DRF.

In Section 4, we examine how these factors influence the DRF size which best balances performance improvements with energy consumption.

Number of DRF Ports: We can limit the number of DRF ports to one read port and one write port, which reduces the DRF area and energy consumption. However, the reduced number of DRF ports penalizes the performance of applications that have multiple independent data streams, such as 1DFIR.

Number and Binding of Register Pointers: The simplest design, in terms of hardware complexity, would provide one dedicated RP for each operand position. Using the ARM ISA as an example, we would have pd, pn and pm, which correspond to the d, n and m operands, respectively. However, such a scheme lacks flexibility and may introduce overheads when the same stream is used as both an input and an output, or in different input operand positions. Providing more RPs with flexible bindings tends to increase the encoded instruction width and the complexity of the interlocks required in pipelined processor implementations.

We evaluate these parameters quantitatively in Section 4.1, focusing primarily on the number of DRF entries.

2.3 RPA Processor Organization

Figure 1 shows a five-stage processor pipeline modified to implement the RPA. The main additions are the DRF and RPs which, as in a conventional scalar pipeline, are read in the decode stage. The post-increment of the RPs is also performed in the decode stage.
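To close the section, here is the blocking argument behind the square-root relationship mentioned under Number of DRF Entries above. It is our own sketch of a standard register-blocking analysis, not an argument reproduced from the paper. Suppose a b × b tile of the output matrix is kept in the DRF:

    DRF entries used by the tile      : N ≈ b²
    cache accesses per reduction step : 2b   (a strip of each input matrix)
    multiply-accumulates per step     : b²   (every strip pair updates the resident tile)
    accesses per arithmetic operation : 2b / b² = 2/b ≈ 2/√N

Cache accesses per operation therefore fall with the square root of the DRF size, matching the behavior described above; whether this asymptotic gain justifies the extra energy and access latency of a larger DRF is the balance examined in Section 4.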