
The Colored Refresh Server for DRAM

Xing Pan, Frank Mueller
North Carolina State University, Raleigh, USA, [email protected]
This work was supported in part by NSF grants 1239246, 1329780, 1525609 and 1813004.

Abstract—Bounding each task's worst-case execution time (WCET) accurately is essential for real-time systems to determine if all deadlines can be met. Yet, access latencies to Dynamic Random Access Memory (DRAM) vary significantly due to DRAM refresh, which blocks access to memory cells. These variations further increase as DRAM density grows. This work contributes the "Colored Refresh Server" (CRS), a uniprocessor scheduling paradigm that partitions DRAM into two distinctly colored groups such that refreshes of one color occur in parallel to the execution of real-time tasks of the other color. By executing tasks in phase with periodic DRAM refreshes of the opposing color, memory requests no longer suffer from refresh interference. Experimental results confirm that refresh overhead is completely hidden and memory throughput is enhanced.

I. INTRODUCTION

Dynamic Random Access Memory (DRAM) has been the memory of choice in embedded systems for many years due to its low cost combined with large capacity, albeit at the expense of volatility. As specified by the DRAM standards [1], [2], each DRAM cell must be refreshed periodically within a given refresh interval. The refresh commands are issued by the DRAM controller via the command bus. This mode, called auto-refresh, recharges all memory cells within the "retention time", which is typically 64 ms for commodity DRAMs under 85°C [1], [2]. While DRAM is being refreshed, a memory space (i.e., a DRAM rank) becomes unavailable to memory requests, so that any such memory reference blocks the CPU pipeline until the refresh completes. Furthermore, a DRAM refresh command closes a previously open row and opens a new row subject to refresh [3], even though data of the old row may be reused (referenced) before and after the refresh. Hence, the delay suffered by the processor due to DRAM refresh includes two aspects: (1) the cost (blocking) of the refresh operation itself, and (2) reloads of the row buffer for data displaced by refreshes. As a result, the response time of a DRAM access depends on its point in time during execution relative to DRAM refresh operations.

Prior work indicated that system performance is significantly degraded by refresh overhead [4], [5], [6], [7], a problem that is becoming more prevalent as DRAMs increase in density. With growing density, more DRAM cells are required per chip, all of which must be refreshed within the same retention time, i.e., more rows need to be refreshed within the same refresh interval. This increases the cost of a refresh operation and thus reduces memory throughput. Even with conservative estimates of growth in density for future DRAM technology, the cost of one refresh operation, tRFC, exceeds 1 microsecond at 32 Gb DRAM size, and the loss in DRAM throughput caused by refreshes reaches nearly 50% at 64 Gb [4].
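To put these numbers in perspective, a rough estimate can be obtained from standard DDR timing parameters (the datasheet values below are assumed for illustration and are not taken from the cited studies). Under auto-refresh, one refresh command is issued per refresh interval tREFI, so the fraction of time a rank is unavailable is approximately

    overhead ≈ tRFC / tREFI,   where tREFI = retention time / 8192 = 64 ms / 8192 ≈ 7.8 µs.

For an 8 Gb DDR4 device with tRFC ≈ 350 ns this already amounts to roughly 4.5% of rank time; with tRFC above 1 µs the same ratio exceeds 12%, and this does not yet account for the row-buffer reloads described above.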
Some work focuses on reducing DRAM refresh latencies from both hardware and software angles. Although the impact of DRAM refresh can be reduced by proposed hardware solutions [8], [9], [10], [11], such solutions take a long time to become widely adopted. Hence, other works assess the viability of software solutions by lowering refresh overhead via exploiting inter-cell variation in retention time [4], [12], reducing unnecessary refreshes [13], [14], and decreasing the probability of a memory access interfering with a refresh [15], [7]. Fine Granularity Refresh (FGR), proposed in JEDEC's DDR4 specification, reduces refresh delays by trading off refresh latency against refresh frequency [2]. Such software approaches either rely heavily on specific data access patterns of workloads or incur high implementation overhead. More significantly, none of them can hide refresh overhead.

For real-time systems, the refresh problem is even more significant. Bounding the worst-case execution time (WCET) of a task's code is key to assuring correctness under schedulability analysis, and only static timing analysis methods can provide safe bounds on the WCET [16]. Due to the asynchronous nature of refreshes relative to task schedules and preemptions, none of the current analysis techniques tightly bound the effect of DRAM refreshes as a blocking term on response time. Atanassov and Puschner [17] discuss the impact of DRAM refresh on the execution time of real-time tasks and calculate the maximum possible increase of execution time due to refreshes. However, this bound is too pessimistic (loose): if the WCET or the blocking term were augmented by the maximum possible refresh delay, many schedules would become theoretically infeasible, even though executions may meet deadlines in practice. Furthermore, as the refresh overhead increases approximately linearly with growing DRAM density, it quickly becomes untenable to augment the WCET or blocking term by ever-increasing refresh delays for future high-density DRAM. Although Bhat et al. [3] make refreshes predictable and reduce preemptions due to refreshes by triggering them in software instead of by hardware auto-refresh, the cost of the refresh operations is only accounted for, not hidden. Also, a task cannot be scheduled under Bhat's scheme if its period is less than the execution time of a burst refresh.

This work contributes the "Colored Refresh Server" (CRS) to remove task preemptions due to refreshes and to hide DRAM refresh overhead. As a result, CRS makes real-time systems more predictable, particularly for high DRAM density. CRS exploits colored memory allocation to partition the entire memory space into two colors corresponding to two server tasks (simply called servers from here on) on a uniprocessor. Each real-time task is assigned one color and associated with the corresponding server, where the two servers have different static priorities. DRAM refresh operations are triggered by two refresh tasks, each of which issues refresh commands to the memory of its corresponding server for a subset of the colors (DRAM ranks) using a burst refresh pattern. More significantly, by appropriately grouping real-time tasks into different servers, refreshes and competing memory accesses can be strategically co-scheduled so that memory reads/writes do not suffer from refresh interference. As a result, access latencies are reduced and memory throughput increases, which tends to make more real-time task sets schedulable. What is more, the overhead of CRS is small and remains constant irrespective of DRAM density/size. In contrast, auto-refresh overhead keeps growing as DRAM density increases.
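To make the per-color burst refresh mentioned above concrete, the following is a minimal C sketch, not the authors' implementation: it assumes a hypothetical controller hook mc_refresh_rank() that issues a single refresh command to one rank (Bhat et al. [3] describe how refreshes can be triggered from software) and simply walks over the ranks of one color, issuing the required number of refresh commands back-to-back.

    /* Minimal sketch (not the published CRS code) of a per-color burst-refresh task. */
    #include <stddef.h>

    #define REFRESH_CMDS_PER_RANK 8192   /* DDR3/DDR4: 8192 refresh commands per 64 ms retention window */

    /* HYPOTHETICAL placeholder: on real hardware this would program the DRAM
     * controller to issue one refresh command to 'rank' and wait tRFC. */
    static void mc_refresh_rank(unsigned rank)
    {
        (void)rank;
    }

    /* Refresh all ranks belonging to one color as a single burst. While this runs,
     * CRS schedules only tasks of the opposite color, so their memory accesses
     * never target a rank that is currently being refreshed. */
    void burst_refresh_color(const unsigned *ranks, size_t nranks)
    {
        for (size_t r = 0; r < nranks; r++)
            for (unsigned i = 0; i < REFRESH_CMDS_PER_RANK; i++)
                mc_refresh_rank(ranks[r]);
    }

The intent, as described above, is that each such burst is released once per retention window and runs in phase opposition to the tasks of its own color.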
Contributions:
(1) The impact of refresh delay under varying DRAM densities/sizes is assessed for real-time systems with stringent timing constraints. We observe that refresh overhead for an application is not easy to predict under standard auto-refresh. Furthermore, the losses in DRAM throughput and performance caused by refreshes quickly become unacceptable for real-time systems with high DRAM density.
(2) The Colored Refresh Server (CRS) for uniprocessors is developed to refresh DRAM via memory space coloring and to schedule tasks via the server policy. Refresh overhead is shown to be almost entirely hidden, since a memory space is either being accessed or refreshed, but never both at the same time. Thus, regular memory accesses no longer suffer from refresh interference, i.e., the blocking effect of refreshes remains hidden in a safe manner.
(3) Experiments with real-time tasks confirm that refresh delays are hidden and that DRAM access latencies are reduced. Consequently, application execution times become more predictable and stable, even when DRAM density increases. An experimental comparison with DDR4's FGR shows that CRS exhibits better performance and higher task predictability.
(4) CRS is realized in software and can be implemented on commercial off-the-shelf (COTS) systems.
(5) Compared to previous work [3], CRS not only hides refresh overhead, but also feasibly schedules short tasks (with periods less than the execution time of a burst refresh) by refactoring them as "copy tasks".
(6) Our approach can be implemented with any real-time scheduling policy supported inside the CRS servers.

II. BACKGROUND

Memory requests are translated into DRAM commands and scheduled while satisfying the timing constraints of DRAM banks and buses. A DRAM controller is also called a node; it governs DRAM memory organized into channels, ranks, and banks (see Fig. 1).

Fig. 1. DRAM System Architecture

A DRAM bank array is organized into rows and columns of individual data cells (see Fig. 2). To resolve a memory access request, the row containing the requested data first needs to be copied from the bank array into the row buffer. As a side effect, the old row in the buffer is closed ("precharge"), incurring a Row Precharge delay, tRP, and the new row is opened ("activate"), incurring a Row Access Strobe delay, tRAS. This is called a row buffer miss. Once a row has been loaded into the row buffer and opened, accesses to adjacent data in the row due to spatial locality incur just a Column Access Strobe penalty, tCAS (row buffer hit), which is much faster than tRP + tRAS.

Fig. 2. DRAM Bank Architecture

A. Memory Space Partitioning

We assume a DRAM hierarchy with node, channel, rank, and bank abstraction. To partition this memory space, we obtained a copy of TintMalloc [18], a heap allocator that "colors" memory pages with controller (node) and bank affinity. TintMalloc allows programmers to select one (or more) colors to choose a memory controller and bank regions disjoint from those of other tasks. DRAM is further partitioned into channels and ranks above banks. The memory space of an application can be chosen such that it conforms to a specific color. E.g., a real-time task can be assigned a private memory space based on rank granularity. When this task runs, it can then only access memory of its own color, i.e., its references are confined to the corresponding DRAM ranks.
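As a small illustration of rank-granularity coloring (a sketch under assumed parameters, not TintMalloc's actual interface): if the platform's physical-address-to-DRAM mapping places the rank index in a known bit field, a colored allocator can keep only those pages whose rank bits match the task's color. The bit positions below are made up for illustration; real mappings are platform-specific.

    /* Sketch of rank-level "coloring" of physical pages under an ASSUMED mapping. */
    #include <stdint.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RANK_SHIFT 15u   /* assumption: rank index held in physical-address bits 16..15 */
    #define RANK_MASK  0x3u  /* assumption: 4 ranks */

    static unsigned rank_of(uint64_t phys)
    {
        return (unsigned)((phys >> RANK_SHIFT) & RANK_MASK);
    }

    /* A colored heap keeps a page only if it maps to the task's rank color. */
    static bool page_has_color(uint64_t phys, unsigned color_rank)
    {
        return rank_of(phys) == color_rank;
    }

    int main(void)
    {
        uint64_t page = 0x12348000ull;   /* example physical page address */
        printf("rank = %u, matches color 1: %d\n",
               rank_of(page), page_has_color(page, 1));
        return 0;
    }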