
CHAPTER 13
DRAM Memory Controller

In modern computer systems, processors and I/O devices access data in the memory system through the use of one or more memory controllers. Memory controllers manage the movement of data into and out of DRAM devices while ensuring protocol compliance, accounting for DRAM-device-specific electrical characteristics, timing characteristics, and, depending on the specific system, even error detection and correction. DRAM memory controllers are often contained as part of the system controller, and the design of an optimal memory controller must include system-level considerations that ensure fairness in arbitration for access between the different agents that read and store data in the same memory system.

The design and implementation of the DRAM memory controller determine the access latency and bandwidth efficiency characteristics of the DRAM memory system. The previous chapters provide a bottom-up approach to the design and implementation of a DRAM memory system. With the understanding of DRAM device operation and system-level organization provided by those chapters, this chapter proceeds to examine DRAM controller design and implementation considerations.

13.1 DRAM Controller Architecture

The function of a DRAM memory controller is to manage the flow of data into and out of DRAM devices connected to that controller in the memory system. However, due to the complexity of DRAM memory-access protocols, the large number of timing parameters, the innumerable combinations of memory system organizations, different workload characteristics, and different design goals, the design space of a DRAM memory controller for a given DRAM device has nearly as much freedom as that of a processor that implements a specific instruction-set architecture. In that sense, just as an instruction-set architecture defines the programming model of a processor, a DRAM-access protocol defines the interface between a DRAM memory controller and the system of DRAM devices. In both cases, actual performance characteristics depend on the specific microarchitectural implementation rather than on the superficial description of a programming model or interface protocol. That is, just as two processors that support the same instruction-set architecture can have dramatically different performance characteristics depending on their respective microarchitectural implementations, two DRAM memory controllers that support the same DRAM-access protocol can have dramatically different latency and sustainable bandwidth characteristics. DRAM memory controllers can be designed to minimize die size, minimize power consumption, maximize system performance, or simply reach a reasonably optimal compromise among these conflicting design goals. Specifically, the Row-Buffer-Management Policy, the Address Mapping Scheme, and the Memory Transaction and DRAM Command Ordering Scheme are particularly important to the design and implementation of DRAM memory controllers.
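To make these design dimensions concrete, the following sketch collects them into a single configuration structure of the kind a controller model might expose. It is a minimal illustration only; the type and field names (dram_ctrl_config_t, row_policy, and so on) are invented for this example, and a real controller would expose many more parameters.

    /* Minimal sketch of the major policy knobs of a DRAM memory controller.
     * All names are illustrative; a real controller also exposes timing,
     * power, refresh, and error-management parameters. */

    typedef enum { ROW_POLICY_OPEN_PAGE, ROW_POLICY_CLOSE_PAGE } row_policy_t;

    typedef enum {                      /* how physical addresses map to DRAM */
        ADDR_MAP_ROW_BANK_COL,          /* favors open-page locality */
        ADDR_MAP_BANK_INTERLEAVED       /* spreads accesses to reduce conflicts */
    } addr_map_t;

    typedef enum {                      /* transaction/command ordering */
        SCHED_IN_ORDER,                 /* FCFS: simple, lowest logic cost */
        SCHED_BANK_ROUND_ROBIN,         /* rotate among banks for parallelism */
        SCHED_ROW_HIT_FIRST             /* prioritize accesses to open rows */
    } sched_policy_t;

    typedef struct {
        row_policy_t   row_policy;      /* Section 13.2 */
        addr_map_t     addr_map;        /* address mapping scheme */
        sched_policy_t sched_policy;    /* command ordering scheme */
    } dram_ctrl_config_t;

Each field corresponds to one of the policy dimensions examined in the remainder of this chapter.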
Due to the increasing disparity between the operating frequency of modern processors and the access latency to main memory, there is a large body of active and ongoing research in the architectural community devoted to the performance optimization of the DRAM memory controller. Specifically, the Address Mapping Scheme, designed to minimize bank address conflicts, has been studied by Lin et al. [2001] and Zhang et al. [2002a]. DRAM Command and Memory Transaction Ordering Schemes have been studied by Briggs et al. [2002], Cuppu et al. [1999], Hur and Lin [2004], McKee et al. [1996a], and Rixner et al. [2000]. Due to the sheer volume of research into optimal DRAM controller designs for different types of DRAM memory systems and workload characteristics, this chapter is not intended as a comprehensive summary of all prior work. Rather, the text in this chapter describes the basic concepts of DRAM memory controller design in the abstract, and relevant research on specific topics is referenced as needed.

Figure 13.1 illustrates some basic components of an abstract DRAM memory controller. The memory controller accepts requests from one or more microprocessors and one or more I/O devices and provides the arbitration interface that determines which request agent will be able to place its request into the memory controller. From a certain perspective, the request arbitration logic may be considered part of the system controller rather than the memory controller. However, as the cost of memory access continues to increase relative to the cost of data computation in modern processors, efforts in performance optimization are combining transaction scheduling and command scheduling policies and examining them in a collective context rather than as separate optimizations. For example, a low-priority request from an I/O device to an already open bank may be scheduled ahead of a high-priority request from a microprocessor to a different row of the same open bank, depending on the access history, respective priorities, and state of the memory system. Consequently, a discussion of transaction arbitration is included in this chapter.

Figure 13.1 also illustrates that once a transaction wins arbitration and enters the memory controller, it is mapped to a memory address location and converted to a sequence of DRAM commands. The sequence of commands is placed in queues that exist in the memory controller. The queues may be arranged as a generic queue pool, where the controller selects from pending commands to execute, or the queues may be arranged so that there is one queue per bank or per rank of memory. Then, depending on the DRAM command scheduling policy, commands are scheduled to the DRAM devices through the electrical signaling interface.

FIGURE 13.1: Illustration of an abstract DRAM memory controller. [Figure: cpu and I/O request streams pass through an arbiter into the memory controller, where transaction scheduling, address translation, and command scheduling stages feed a queue pool and bank-management logic that drives DRAM banks (Bank 0, Bank 1, Bank 2) on a DIMM through the electrical signaling interface.]

In the following sections, the various components of the memory controller illustrated in Figure 13.1 are examined separately, with the exception of the electrical signaling interface. Although the electrical signaling interface may be one of the most critical components in modern, high-data-rate memory systems, the challenges of signaling are examined separately in Chapter 9. Consequently, the focus of this chapter is limited to the digital logic components of the DRAM memory controller.
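Before moving on, a rough code sketch of the arbitration-to-queue flow described above may be helpful: once a transaction wins arbitration, the controller translates its physical address into DRAM coordinates and expands it into a command sequence. The code below models this under a close-page policy; all names, field widths, and the address-mapping layout are assumptions made for this example, not the layout of any particular controller.

    #include <stdint.h>

    typedef enum { CMD_PRECHARGE, CMD_ACTIVATE, CMD_READ, CMD_WRITE } dram_cmd_kind_t;

    typedef struct {                 /* one DRAM command bound for a device */
        dram_cmd_kind_t kind;
        unsigned rank, bank;
        uint32_t row, col;
    } dram_cmd_t;

    /* Address translation: carve rank/bank/row/column fields out of a
     * physical address. The field order shown is one common choice; the
     * address mapping scheme (Section 13.1) decides the real layout. */
    static dram_cmd_t translate(uint64_t paddr, dram_cmd_kind_t kind) {
        dram_cmd_t c = { .kind = kind };
        c.col  = (paddr >> 3)  & 0x3FF;   /* assume 1024 columns  */
        c.bank = (paddr >> 13) & 0x7;     /* assume 8 banks       */
        c.row  = (paddr >> 16) & 0xFFFF;  /* assume 65536 rows    */
        c.rank = (paddr >> 32) & 0x1;     /* assume 2 ranks       */
        return c;
    }

    /* Command generation under a close-page policy: every transaction
     * expands to ACTIVATE plus a column access (the bank is precharged
     * after the access, so no explicit PRECHARGE is queued here). */
    static int expand_close_page(uint64_t paddr, int is_write,
                                 dram_cmd_t out[2]) {
        out[0] = translate(paddr, CMD_ACTIVATE);
        out[1] = translate(paddr, is_write ? CMD_WRITE : CMD_READ);
        return 2;                         /* number of commands produced */
    }

Under an open-page policy, the expansion step would instead consult the bank state and emit only a column access on a row hit, or a precharge-activate-access sequence on a row conflict; Section 13.2 examines this distinction.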
13.2 Row-Buffer-Management Policy

In modern DRAM devices, the arrays of sense amplifiers can also act as buffers that provide temporary data storage. In this chapter, policies that manage the operation of the sense amplifiers are referred to as row-buffer-management policies. The two primary row-buffer-management policies are the open-page policy and the close-page policy, and depending on the system, different row-buffer-management policies can be used to optimize performance or minimize the power consumption of the DRAM memory system.

13.2.1 Open-Page Row-Buffer-Management Policy

In commodity DRAM devices, data access to and from the DRAM storage cells is a two-step process that requires separate row activation commands and column access commands. In cases where the memory-access sequence possesses a high degree of temporal and spatial locality, memory system architects and design engineers can take advantage of that locality by directing temporally and spatially adjacent memory accesses to the same row of memory. The open-page row-buffer-management policy is designed to favor memory accesses to the same row of memory by keeping the sense amplifiers open and holding a row of data for ready access. In a DRAM controller that implements the open-page policy, once a row of data is brought to the array of sense amplifiers in a bank of DRAM cells, different columns of the same row can be accessed again with the minimal latency of tCAS. In the case where another memory read access is made to the same row, that access can occur with minimal latency, since the row is already active in the sense amplifiers and only a column access command is needed to move the data from the sense amplifiers to the memory controller. However, in the case where the access is to a different row of the same bank, the memory controller must first precharge the DRAM array, engage another row activation, and then perform the column access.

13.2.2 Close-Page Row-Buffer-Management Policy

In contrast to the open-page row-buffer-management policy, the close-page row-buffer-management policy is designed to favor accesses to random locations in memory and optimally supports memory request patterns with low degrees of access locality. The open-page policy and closely related variant policies are typically deployed in memory systems designed for low-processor-count, general-purpose computers. In contrast, the close-page policy is typically deployed in memory systems designed for large-processor-count multiprocessor systems or specialty embedded systems. The reason for this split is that in large systems, the intermixing of memory request sequences from multiple concurrent threaded contexts reduces the locality of the resulting memory-access sequence. Consequently, the probability of a row hit decreases and the probability of a bank conflict increases in these systems, reaching a tipping point of sorts where a close-page policy provides better performance for the computer system.
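The latency consequences of the two policies can be summarized in a few lines of code. The sketch below assumes illustrative timing parameters tRP, tRCD, and tCAS, each set to 15 controller clock cycles, and ignores refresh and all other timing constraints.

    /* Illustrative timing parameters, in controller clock cycles. */
    enum { tRP = 15, tRCD = 15, tCAS = 15 };

    /* Open-page policy: a row hit needs only a column access (tCAS);
     * a row conflict in the same bank must precharge, re-activate, and
     * then perform the column access (tRP + tRCD + tCAS). */
    static int open_page_latency(int row_hit) {
        return row_hit ? tCAS : tRP + tRCD + tCAS;
    }

    /* Close-page policy: every access finds the bank precharged, so the
     * latency is uniformly an activation plus a column access. */
    static int close_page_latency(void) {
        return tRCD + tCAS;
    }

With these (deliberately equal) parameters, an open-page row hit costs 15 cycles and a close-page access costs 30, but an open-page row conflict costs 45; equating 15h + 45(1 - h) with 30 shows that the open-page policy wins exactly when the row-hit rate h exceeds 50%, which is the locality argument above in quantitative form.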