A PRET Microarchitecture Implementation with Repeatable Timing and Competitive Performance

Isaac Liu1, Jan Reineke2, David Broman1,3, Michael Zimmer1, Edward A. Lee1
[email protected], [email protected], {broman,mzimmer,eal}@eecs.berkeley.edu
1University of California, Berkeley, CA, USA   2Saarland University, Germany   3Linköping University, Sweden

Abstract—We contend that repeatability of execution times is crucial to the validity of testing of real-time systems. However, computer architecture designs fail to deliver repeatable timing, a consequence of aggressive techniques that improve average-case performance. This paper introduces the Precision-Timed ARM (PTARM), a precision-timed (PRET) microarchitecture implementation that exhibits repeatable execution times without sacrificing performance. The PTARM employs a repeatable thread-interleaved pipeline with an exposed memory hierarchy, including a repeatable DRAM controller. Our benchmarks show an improved throughput compared to a single-threaded in-order five-stage pipeline, given sufficient parallelism in the software.

I. INTRODUCTION

Can we trust that a processor repeatedly performs correct computations? The answer to this question depends on what we mean by correct computations. In the core abstraction of computation, rooted in von Neumann, Turing, and Church, correctness refers only to correct transformation of data. Execution time is irrelevant to correctness; it is instead a performance factor, a quality metric where faster is better, not more correct. However, in cyber-physical systems, which combine computation (embedded systems), networks, and physical processes, execution time is often a correctness criterion [1]. Although extensive research has been invested in techniques for formal verification, testing is in practice the dominating method to gain confidence that a system is working correctly according to some specification. However, without repeatability, testing proves little. Although vital in embedded and real-time systems, repeatable timing is also valuable in general-purpose systems. For concurrent programs, in particular, non-repeatability is an obstacle to reliability [2].

Repeatable timing is easy to achieve if you are prepared to sacrifice performance. The engineering challenge is to enable both timing repeatability and performance. Conventional architectures with caches and pipelines improve average-case performance, but make execution time inherently non-repeatable.

In previous work [3], [4] we outline ideas for achieving repeatable timing by using a thread-interleaved pipeline, replacing caches with programmable scratchpad memories, and designing a DRAM controller with predictable timing. In this paper we present and evaluate a concrete implementation of this approach for achieving repeatable timing with competitive performance.

Repeatability of timing can be viewed at different levels of granularity. At a coarse-grained level, repeatability means that for a given input, a program always yields the same execution time. At the most fine-grained level, each processor instruction always takes the same amount of time. Excessively fine-grained constraints on the execution time lead to design decisions that sacrifice performance. On the other hand, an excessively coarse-grained solution may make program fragments non-repeatable due to context dependencies. In this work we confront this tradeoff, and show that a microarchitecture with repeatable timing and competitive performance is feasible. This enables designs that are assured of the same temporal behavior in the field as exhibited on the test bench. To be specific, in this paper we make the following contributions:
• To enable repeatability and avoid pipeline hazards, we design and implement a thread-interleaved pipeline for a subset of the ARMv4 ISA. The architecture is realized as a soft core on a Xilinx Virtex 5 FPGA (Section III-A).
• We extend previous work on predictable DRAM controllers [5] to make DRAM memory accesses repeatable. We discuss the tradeoff between fine-grained timing repeatability and average-case performance (Section III-C).
• We evaluate the performance of the architecture by comparing with both a conventional ARMv2a and an ARMv4 architecture. With the assumption of parallelizable tasks or independent tasks, we show that the thread-interleaved pipeline achieves significant performance gains (Section IV).

ACCEPTED VERSION. In Proceedings of the 30th IEEE International Conference on Computer Design (ICCD 2012), Montréal, Quebec, Canada, 2012, IEEE. Published version: http://dx.doi.org/10.1109/ICCD.2012.6378622
© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This work was supported in part by the Center for Hybrid and Embedded Software Systems (CHESS) at UC Berkeley, which receives support from the National Science Foundation (NSF awards #0720882 (CSR-EHS: PRET), #0931843 (ActionWebs), and #1035672 (CSR-CPS Ptides)), the U.S. Army Research Laboratory (ARL #W911NF-11-2-0038), the Air Force Research Lab (AFRL), the Multiscale Systems Center (MuSyC), one of six research centers funded under the Focus Center Research Program, a Semiconductor Research Corporation program, and the following companies: Bosch, National Instruments, Thales, and Toyota. The third author was funded by the Swedish Research Council #623-2011-955.

II. RELATED WORK

There is a considerable body of recent work [6], [7], [8] on the related problem of building timing predictable computer architectures. Timing predictability is concerned with the ability to statically compute safe and precise upper bounds on execution times of programs. In contrast, timing repeatability is concerned with the ability to repeat timing. A system may be predictable, yet non-repeatable, e.g., due to pipelining or caching. Similarly, a system's timing may be repeatable, yet hard to predict statically. The PTARM architecture presented in this paper is both timing repeatable and timing predictable.
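
To make the distinction operational, the sketch below (an illustration, not code from the paper) checks repeatability empirically: the same workload is run repeatedly on the same input, and the observed cycle counts are compared. The functions read_cycle_counter() and task() are hypothetical placeholders for a platform-specific cycle counter and the code under test. A timing-predictable machine only guarantees that every observation stays below a statically computed bound; a timing-repeatable machine makes every observation identical.

    /* Illustrative sketch (not code from the paper): an empirical check of
     * coarse-grained timing repeatability. read_cycle_counter() and task()
     * are hypothetical placeholders for a platform-specific cycle counter
     * and the code under test. */
    #include <stdint.h>
    #include <stdio.h>

    extern uint64_t read_cycle_counter(void);   /* hypothetical platform timer */
    extern void task(const int *input, int n);  /* hypothetical code under test */

    #define RUNS 100

    int check_repeatable(const int *input, int n) {
        uint64_t first = 0;
        for (int r = 0; r < RUNS; r++) {
            uint64_t start = read_cycle_counter();
            task(input, n);
            uint64_t elapsed = read_cycle_counter() - start;
            if (r == 0) {
                first = elapsed;
            } else if (elapsed != first) {
                /* Never reached on a repeatable architecture for a fixed input;
                 * routinely reached on a cached, speculative pipeline. */
                printf("run %d took %llu cycles, run 0 took %llu\n",
                       r, (unsigned long long)elapsed, (unsigned long long)first);
                return 0;
            }
        }
        return 1;   /* every run took exactly the same number of cycles */
    }

Such a check can validate repeatability on the test bench, but it says nothing about worst-case bounds; conversely, a static WCET bound says nothing about whether two runs take the same time. The two properties are independent, as the paragraph above notes.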

Several architectures have been proposed that exhibit repeatable timing. Whitham [9] and Schoeberl [10] both use microcode implementations of architectures to achieve repeatable timing. Whitham introduces the Microprogrammed Coarse Grained Reconfigurable Processor (MCGREP) [9], which is a reconfigurable predictable architecture. MCGREP uses microcode to implement pipeline operations on a simple two-stage pipeline with multiple execution units. The pipeline stores no internal state and no caches are used, so each microcode operation takes a fixed number of cycles, unaffected by execution history.

Schoeberl presents the Java Optimized Processor (JOP) [10]. JOP uses a two-level stack cache architecture to implement the stack-based architecture of the JavaVM. It implements a three-stage pipeline and uses two registers to store the top two entries of the stack; the remaining stack is stored in the SRAM. No branch predictor is used, as only a small branch delay penalty is incurred. All bytecode on JOP is translated into fixed-length microcode, and each microcode executes in a fixed number of cycles, independent of its surrounding instruction.

Andalam et al. [11] propose the Auckland Reactive PRET (ARPRET), designed to execute compiled PRET-C programs. PRET-C is a C extension (via macros), with certain restrictions, supporting synchronous concurrency and high-level con-

III. PRECISION-TIMED ARM

PTARM is a concrete implementation of a precision-timed (PRET) machine [4], a computer architecture designed for predictable and repeatable performance. PTARM implements a subset of the ARMv4 ISA [19] and does not support Thumb mode, an extension that compacts instructions to 16 bits instead of the typical 32 bits. Conventional architectures use complex pipelines and speculation techniques to improve performance. This leads to non-repeatable behavior because instruction execution is affected by the implicit state of the hardware. PTARM instead improves performance through predictable and repeatable hardware techniques. These include a refined thread-interleaved pipeline, an exposed memory hierarchy, and a repeatable DRAM memory controller. In this section we give an overview of the PTARM architecture and discuss how repeatability is achieved.

A. A Thread-Interleaved Pipeline

Thread-interleaved pipelines fully exploit thread-level parallelism (TLP) by using a fine-grained thread switching policy: every cycle, a different hardware thread is fetched for execution. PTARM implements an in-order, single-issue, five-stage pipeline, similar to a conventional five-stage RISC pipeline. State is maintained for each thread in the pipeline to implement multithreading. A round-robin thread scheduling policy reduces the context-switch overhead to zero and maintains repeatable timing for all hardware threads.

By interleaving enough threads, explicit dependencies between instructions within the pipeline can be completely removed in thread-interleaved pipelines. Explicit dependencies are dependencies that arise from the flow of data at the instruction level, such as data dependencies on register values or control dependencies on a branch address. In general, by interleaving at least as many threads as there are pipeline stages, each thread has at most one instruction in flight at any time, so these dependencies cannot cause hazards between instructions in the pipeline.
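
To illustrate the scheme, the following is a minimal C sketch of round-robin thread interleaving in a five-stage pipeline; it is a cycle-level illustration only, not the PTARM hardware, and the constants NUM_THREADS and NUM_STAGES and the simple cycle loop are assumptions made for the example. The point is the invariant that, with at least as many hardware threads as pipeline stages, the instructions occupying the stages in any cycle all belong to different threads, so no forwarding or stall logic is needed and each thread's timing does not depend on what the other threads execute.

    /* Minimal sketch (illustrative only, not the PTARM RTL): a cycle-level
     * model of round-robin thread interleaving in a five-stage pipeline.
     * With NUM_THREADS >= NUM_STAGES, the instructions in flight in any
     * cycle all belong to different hardware threads, so no forwarding or
     * stall logic is needed. */
    #include <assert.h>
    #include <stdio.h>

    #define NUM_STAGES  5   /* IF, ID, EX, MEM, WB */
    #define NUM_THREADS 5   /* illustrative; must be >= NUM_STAGES */

    int main(void) {
        int stage_thread[NUM_STAGES];          /* thread id occupying each stage */
        unsigned long pc[NUM_THREADS] = {0};   /* per-thread state, e.g. the PC  */
        int fetch_thread = 0;

        for (int i = 0; i < NUM_STAGES; i++)
            stage_thread[i] = -1;              /* pipeline starts empty */

        for (int cycle = 0; cycle < 20; cycle++) {
            /* Advance the pipeline: WB retires, every stage shifts down. */
            for (int i = NUM_STAGES - 1; i > 0; i--)
                stage_thread[i] = stage_thread[i - 1];

            /* Fetch stage: round-robin thread selection, zero switch cost. */
            stage_thread[0] = fetch_thread;
            pc[fetch_thread] += 4;             /* fixed-size 32-bit instructions */
            fetch_thread = (fetch_thread + 1) % NUM_THREADS;

            /* Invariant: no thread occupies two stages at once, hence no data
             * or control hazards between instructions in the pipeline. */
            for (int i = 0; i < NUM_STAGES; i++)
                for (int j = i + 1; j < NUM_STAGES; j++)
                    assert(stage_thread[i] < 0 ||
                           stage_thread[i] != stage_thread[j]);

            printf("cycle %2d  stages(IF..WB):", cycle);
            for (int i = 0; i < NUM_STAGES; i++)
                printf(" %2d", stage_thread[i]);
            printf("\n");
        }
        return 0;
    }

A single-threaded in-order pipeline would need forwarding paths and stalls to handle the hazards that this interleaving avoids; the cost is that each thread only advances once every NUM_THREADS cycles, which is why the evaluation in Section IV assumes sufficient parallelizable or independent tasks.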
