Cache Where You Want! Reconciling Predictability and Coherent Caching

Ayoosh Bansal∗, Jayati Singh∗, Yifan Hao∗, Jen-Yang Wen∗, Renato Mancuso†, and Marco Caccamo‡
∗University of Illinois at Urbana-Champaign, {ayooshb2, jayati, yifanh5, jwen11}@illinois.edu
†Boston University, [email protected]
‡Technical University of Munich, [email protected]

Abstract—Real-time and cyber-physical systems need to interact with and respond to their physical environment in predictable time. While multicore platforms provide incredible computational power and throughput, they also introduce new sources of unpredictability. Large fluctuations in the latency to access data shared between multiple cores are an important contributor to the overall execution-time variability. In addition to the temporal unpredictability introduced by caching, parallel applications with data shared across multiple cores also pay additional latency overheads due to data coherence. Analyzing the impact of data coherence on the worst-case execution time of real-time applications is challenging because manufacturers reveal only scarce implementation details. This paper presents application-level control for caching data at different levels of the cache hierarchy. The rationale is that by caching data only in a shared cache level it is possible to bypass the private caches, so that the access latency to data present in caches becomes independent of its coherence state. We discuss the existing architectural support as well as the hardware and OS modifications required to support the proposed cacheability control. We evaluate the system on an architectural simulator and show that the worst-case execution time for a single memory write request is reduced by 52%. Benchmark evaluations show that the proposed technique has a minimal impact on average performance.

Index Terms—hardware/software co-design, worst-case execution time, cache coherence, memory contention

I. Introduction

The last decade has witnessed a profound transformation in the way real-time systems are designed and integrated. At the root of this transformation are ever-growing data-heavy and time-sensitive real-time applications. As scaling in processor speed has reached a limit, multi-core solutions [1] have proliferated. Embedded multi-core systems have introduced a plethora of new challenges for real-time applications. Not only does this add a new dimension to scheduling, but remarkably the fundamental principle that the worst-case execution time (WCET) of an application can be estimated in isolation has been shaken.

In multi-core systems, major sources of unpredictability arise from inter-core contention over shared memory resources [2], [3]. Memory resource partitioning techniques present a suitable approach to mitigate undesired temporal interference between cores [4]–[6]. However, memory resource partitioning is well suited only for systems where data exchange between cores is scarce or nonexistent [7]. Heavy data-processing or data-pipelining workloads, on the other hand, are often internally structured as multi-threaded applications, where coordination and fast data exchange between parallel execution flows on different cores are crucial. This exchange is based on shared memory.

Modern platforms generally feature a multi-level cache hierarchy, with the first cache level (L1) comprised of private per-core caches. When multiple threads access the same memory locations, it is crucial to ensure the coherence of the different copies of the same memory block in the multiple L1 caches. Dedicated hardware circuitry, namely the coherence controller, exists to maintain this invariant. Because maintaining coherence requires coordination among distributed L1 caches, it introduces overhead.

Cache coherence introduces two main obstacles for real-time systems. First, hardware coherence protocols are a preciously guarded intellectual property of hardware manufacturers. As such, scarce details are available to study the worst-case behavior of coherent data exchange. Second, coherence controllers are not designed to optimize for worst-case behavior.

One approach to achieve predictable coherence consists of re-designing coherence protocols and controllers [8], [9]. Doing so, however, requires extensive modifications to existing processor designs, with a possibly significant impact on performance.

In this paper, we propose a new approach to achieve predictable coherence. The key intuition is that if memory blocks accessed by multiple applications are cached only in shared levels, e.g., the last-level cache, multiple copies of cache lines do not exist and coherence is trivially satisfied. Based on this, we define a new memory type that is non-cacheable in private (inner) cache levels, but cacheable in shared (outer) caches, namely Inner-Non-Cacheable, Outer-Cacheable (INC-OC). The INC-OC memory type coexists with traditional memory types. Control over which type of memory should be used for different areas of an application's working set is then provided to the developer and/or compiler (a usage sketch is given after the following list). The key contributions of this work can be summarized as follows:
• A novel solution for predictable-time access to coherent data, without the variability induced by coherence mechanisms.
• Prototype evaluation on an architectural simulator [10].
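To make the intended usage concrete, the following minimal sketch shows how a developer might place a shared producer/consumer buffer in INC-OC memory. It is illustrative only: the MAP_INC_OC mmap flag is hypothetical, standing in for whatever interface a modified OS would expose (the actual OS support is discussed in later sections); everything else is standard C.

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Hypothetical flag: NOT part of Linux. A modified kernel would map
     * pages requested with it using the INC-OC memory type. */
    #ifndef MAP_INC_OC
    #define MAP_INC_OC 0x800000
    #endif

    /* One page of data shared by threads running on different cores. */
    typedef struct {
        volatile uint32_t head;                        /* producer index */
        volatile uint32_t tail;                        /* consumer index */
        uint8_t payload[4096 - 2 * sizeof(uint32_t)];
    } shared_ring_t;

    int main(void)
    {
        /* Coherence-critical shared data goes in an INC-OC mapping: it is
         * cached only at the shared level, so no private-cache copies
         * exist and no coherence transactions occur on access. */
        shared_ring_t *ring = mmap(NULL, sizeof(*ring),
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED | MAP_ANONYMOUS | MAP_INC_OC,
                                   -1, 0);
        if (ring == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        ring->head = ring->tail = 0;
        /* ... spawn threads pinned to different cores that communicate
         * through *ring with predictable shared-data access latency ... */
        munmap(ring, sizeof(*ring));
        return 0;
    }

Private, core-local data would keep the traditional fully cacheable memory type, so only the working-set areas that actually need predictable coherent access pay the cost of bypassing the private caches.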
II. Solution Overview

The main idea is to allow application developers or compilers to use their knowledge of the application to choose between the trade-offs of worst-case vs. average-case access time at fine granularity. The choice of cacheability determines the cache levels in which the data can and cannot be cached. Data locations for which strong worst-case latency guarantees are required would be selectively cached in shared levels only. Data coherence is achieved as all accesses go to the same cached copy of such data; coherence overheads and variability are avoided.

[Fig. 1. Generalized multi-socketed multi-cluster processor: eight cores (Core 0-7), each with a private L1 cache; each pair of cores shares an L2 cache (L2 Cache 0-3); each cluster of four cores shares an L3 cache (L3 Cache 0-1); DRAM is shared by all cores.]

Consider a hypothetical system as shown in Figure 1. For each core C, consider the set M of all memory blocks in its data pipeline. A memory block can be a specific cache ($) or DRAM. For example:

M(C0) = {L1$0, L2$0, L3$0, DRAM}    (1)
M(C1) = {L1$1, L2$0, L3$0, DRAM}    (2)
M(C7) = {L1$7, L2$3, L3$1, DRAM}    (3)

We propose that any cache line which is accessed by only a subset of cores be stored/cached only in the intersection of the sets M of those cores. Consider a cache line accessed by Core 0 and Core 1. This cache line can be stored in L2 Cache 0, L3 Cache 0 and DRAM, but not in L1 Cache 0 or L1 Cache 1:

M(C0) ∩ M(C1) = {L2$0, L3$0, DRAM}    (4)

Consider another cache line that is accessed from Core 0 and Core 7. Since their data pipelines share no cache, this line can only be kept in DRAM:

M(C0) ∩ M(C7) = {DRAM}    (5)
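The allowed locations for a shared line are thus a plain set intersection over the topology of Figure 1. The short program below, a minimal illustrative sketch (not taken from the paper's implementation), models that topology and reproduces Eqs. (4) and (5):

    #include <stdio.h>

    #define LEVELS 4   /* L1$, L2$, L3$, DRAM */

    /* Fig. 1 topology: per level, the block id in core c's data pipeline.
     * L1 is private, pairs of cores share an L2, clusters of four share
     * an L3, and DRAM is common to all cores. */
    static void pipeline(int c, int m[LEVELS])
    {
        m[0] = c;        /* L1$c   */
        m[1] = c / 2;    /* L2$c/2 */
        m[2] = c / 4;    /* L3$c/4 */
        m[3] = 0;        /* DRAM   */
    }

    /* Print M(Ca) ∩ M(Cb): the levels where both cores reach the same
     * physical block, i.e., where a line they share may be cached. */
    static void allowed(int a, int b)
    {
        static const char *name[LEVELS] = { "L1$", "L2$", "L3$", "DRAM" };
        int ma[LEVELS], mb[LEVELS];
        pipeline(a, ma);
        pipeline(b, mb);
        printf("M(C%d) ∩ M(C%d) = {", a, b);
        for (int lvl = 0; lvl < LEVELS; lvl++) {
            if (ma[lvl] != mb[lvl])
                continue;              /* distinct blocks: not allowed */
            if (lvl == LEVELS - 1)
                printf(" DRAM");
            else
                printf(" %s%d", name[lvl], ma[lvl]);
        }
        printf(" }\n");
    }

    int main(void)
    {
        allowed(0, 1);   /* { L2$0 L3$0 DRAM }, matching Eq. (4) */
        allowed(0, 7);   /* { DRAM },           matching Eq. (5) */
        return 0;
    }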
Under these restrictions, explicit mechanisms to maintain data coherence are not required. The restrictions should be expressible at the application level, possibly at a granularity as fine as a cache line, so that hard real-time applications can leverage predictable access times to shared data, while non-real-time applications running on the same system can enjoy the higher throughput of hardware-managed cache coherence, as they are not affected by the occasional latency spikes.

In reality, the closest existing support is cacheability control for two cache levels, expressed at memory-page granularity, in the ARMv8-A ISA, as further described in Section IV-B. The implementation and evaluation are therefore based on an ARMv8-A processor simulator (Figure 3) and limited to two cacheability levels.
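For orientation ahead of Section IV-B: ARMv8-A encodes inner and outer cacheability separately in the attribute fields of the MAIR_EL1 register, and each stage-1 page-table descriptor selects one field through its AttrIndx bits. The snippet below only computes and prints such an encoding; it illustrates the attribute layout (writing MAIR_EL1 itself requires kernel code running at EL1), and the assignment of Attr1 to the INC-OC type is our arbitrary choice for illustration, not prescribed by the ISA.

    #include <stdint.h>
    #include <stdio.h>

    /* ARMv8-A MAIR attribute bytes for Normal memory:
     * bits [7:4] = outer cacheability, bits [3:0] = inner cacheability. */
    #define OUTER_WB_RWALLOC 0xF0u  /* Write-Back, Read/Write-Allocate */
    #define INNER_WB_RWALLOC 0x0Fu  /* Write-Back, Read/Write-Allocate */
    #define INNER_NONCACHE   0x04u  /* Inner Non-cacheable             */

    int main(void)
    {
        /* Attr0: conventional fully cacheable Normal memory (0xFF).
         * Attr1: Inner Non-cacheable, Outer Write-Back (0xF4), i.e., the
         * INC-OC memory type; pages using it would carry AttrIndx = 1
         * in their page-table descriptors. */
        uint64_t attr0 = OUTER_WB_RWALLOC | INNER_WB_RWALLOC;
        uint64_t attr1 = OUTER_WB_RWALLOC | INNER_NONCACHE;
        uint64_t mair  = (attr1 << 8) | attr0;

        printf("MAIR_EL1 = 0x%016llx\n", (unsigned long long)mair);
        return 0;
    }

Note that this inner/outer split is coarser than the per-level control of Eqs. (1)-(5): which caches fall in the inner versus the outer attribute domain is fixed by the implementation, which is one reason the evaluation is limited to two cacheability levels.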

III. Related Work

Multi-core systems have enabled multi-threaded real-time workloads. Real-time applications with parallel, precedence-constrained processing tasks are often represented as periodic (or sporadic) DAG tasks [11]. Scheduling of DAG tasks has received considerable attention [11]–[13]. Alongside, application-level frameworks that structure real-time applications to follow the DAG model have been proposed and evaluated [14].

In practice, however, the implementation and analysis of DAG real-time tasks are challenging because analytically upper-bounding the worst-case execution time (WCET) of a processing node is hard. As multiple DAG nodes execute in parallel, they interfere with each other on the shared memory hierarchy. Well-known sources of interference arise from space contention in the shared cache levels [15], [16]; contention for the allocation of miss status holding registers (MSHRs) [17]; bandwidth contention when accessing the DRAM memory controller [5], [18]; and bank-level contention due to request re-ordering in the DRAM storage [6]. Our work focuses on tightening and simplifying the analysis of the WCET bound by making shared-data accesses immune to the unpredictable temporal effects of cache coherence controllers.

Cache contention among parallel tasks is a major cause of interference [16]. Mitigation approaches include selective caching [19], [20] and cache partitioning [4], [21]. Strict resource partitioning among cores is effective for independent tasks, but data sharing is necessary to implement processing pipelines, such as real-time tasks following the sporadic DAG model. In [22], the authors acknowledge that data sharing between tasks is inevitable in mixed-criticality multi-core systems. In light of these considerations, previously proposed solutions have been adapted to allow sharing [23], [24].
