Keystone Architecture DDR3 Memory Controller

Total Pages: 16

File Type: PDF, Size: 1020 KB

Keystone Architecture DDR3 Memory Controller User's Guide
Literature Number: SPRUGV8E, November 2010–Revised January 2015

Contents

Preface ... 11
1 Introduction ... 13
  1.1 Purpose of the Peripheral ... 14
  1.2 Features ... 14
  1.3 Industry Standard(s) Compliance Statement ... 14
2 Peripheral Architecture ... 16
  2.1 Clock Interface ... 17
  2.2 SDRAM Memory Map ... 17
  2.3 Signal Descriptions ... 17
  2.4 Protocol Descriptions ... 18
    2.4.1 Mode Register Set (MRS or EMRS) ... 19
    2.4.2 Refresh Mode ... 20
    2.4.3 Activation ... 20
    2.4.4 Deactivation ... 20
    2.4.5 READ Command ... 20
    2.4.6 Write (WR) Command ... 21
  2.5 Address Mapping ... 22
  2.6 DDR3 Memory Controller Interface ... 27
    2.6.1 Arbitration ... 28
    2.6.2 Command Starvation ... 29
    2.6.3 Possible Race Condition ... 29
    2.6.4 Class of Service ... 29
  2.7 Refresh Scheduling ... 30
  2.8 Self-Refresh Mode ... 30
    2.8.1 Extended Temperature Range ... 31
  2.9 Reset Considerations ... 31
  2.10 Turnaround Time ... 32
  2.11 DDR3 SDRAM Memory Initialization ... 32
    2.11.1 DDR3 Initialization Sequence ... 34
  2.12 Dual Rank Support ... 34
  2.13 Leveling ... 34
    2.13.1 Full Leveling (Auto Leveling) ... 34
    2.13.2 Incremental Leveling ... 35
      2.13.2.1 Ramp Incremental Leveling ... 36
    2.13.3 Impact On Bandwidth ... 36
    2.13.4 Programming Full Leveling ... 36
      2.13.4.1 Leveling Timeout ... 37
      2.13.4.2 Read Data Eye Training Errata For Full Leveling ... 37
    2.13.5 Programming Incremental Leveling ... 38
      2.13.5.1 Standalone Incremental Leveling ... 38
    2.13.6 Programming Ratio Forced Leveling ... 38
    2.13.7 Using Invert Clock Out ... 38
  2.14 Interrupt Support ... 39
  2.15 EDMA Event Support ... 39
  2.16 Emulation Considerations ... 39
  2.17 ECC ... 39
  2.18 Power Management ... 40
    2.18.1 SDRAM Self-Refresh Mode ... 40
    2.18.2 SDRAM Power-Down Mode ... 40
  2.19 Performance Monitoring ... 40
3 Using the DDR3 Memory Controller ... 43
  3.1 Connecting the DDR3 Memory Controller to DDR3 SDRAM ... 44
  3.2 Configuring DDR3 Memory Controller Registers to Meet DDR3 SDRAM Specifications ... 48
    3.2.1 Programming the SDRAM Configuration Register (SDCFG) ... 48
    3.2.2 Programming the SDRAM Refresh Control Register (SDRFC) ... 48
    3.2.3 Configuring SDRAM Timing Registers (SDTIM1, SDTIM2, SDTIM3, SDTIM4) ... 49
    3.2.4 Configuring Leveling Registers ... 50
    3.2.5 Configuring Read Latency ... 50
4 DDR3 Memory Controller Registers ... 52
  4.1 Module ID and Revision Register (MIDR) ... 55
  4.2 DDR3 Memory Controller Status Register (STATUS) ... 56
  4.3 SDRAM Configuration Register (SDCFG) ... 57
  4.4 SDRAM Refresh Control Register (SDRFC) ... 59
  4.5 SDRAM Timing 1 (SDTIM1) Register ... 60
  4.6 SDRAM Timing 2 (SDTIM2) Register ... 61
  4.7 SDRAM Timing 3 (SDTIM3) Register ... 62
  4.8 Power Management Control Register (PMCTL) ... 63
  4.9 VBUSM Configuration Register (VBUSM_CONFIG) ... 65
  4.10 Performance Counter 1 Register (PERF_CNT_1) ... 66
  4.11 Performance Counter 2 Register (PERF_CNT_2) ... 67
  4.12 Performance Counter Config Register (PERF_CNT_CFG) ... 68
  4.13 Performance Counter Master Region Select Register (PERF_CNT_SEL) ... 69
  4.14 Performance Counter Time Register (PERF_CNT_TIM) ... 70
  4.15 Interrupt Raw Status Register (IRQSTATUS_RAW_SYS) ... 71
  4.16 Interrupt Status Register (IRQSTATUS_SYS) ... 72
  4.17 …
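The register set and configuration flow listed in Sections 3.2 and 4 boil down to a handful of memory-mapped writes. The sketch below is illustrative only: the base address, register offsets, and the idea of passing precomputed field values are assumptions made for the example, not values from the user's guide; the real offsets, encodings, and required ordering come from the device data manual and the attached DDR3 SDRAM's datasheet.

```c
/* Minimal bare-metal sketch of the Section 3.2 register-programming flow.
 * ASSUMPTIONS: the base address and register offsets below are placeholders,
 * and the caller supplies field values already encoded per the data manual. */
#include <stdint.h>

#define DDR3_CTRL_BASE  0x21000000u   /* placeholder base address            */
#define REG(off)        (*(volatile uint32_t *)(DDR3_CTRL_BASE + (off)))

#define SDCFG_OFF   0x08u             /* placeholder offsets                 */
#define SDRFC_OFF   0x10u
#define SDTIM1_OFF  0x18u
#define SDTIM2_OFF  0x20u
#define SDTIM3_OFF  0x28u

void ddr3_init_sketch(uint32_t sdtim1, uint32_t sdtim2, uint32_t sdtim3,
                      uint32_t sdcfg, uint32_t sdrfc)
{
    /* 3.2.3: SDRAM timing registers, encoded from the DDR3 datasheet values */
    REG(SDTIM1_OFF) = sdtim1;
    REG(SDTIM2_OFF) = sdtim2;
    REG(SDTIM3_OFF) = sdtim3;

    /* 3.2.1: SDRAM configuration (CAS latency, page size, banks, ...) */
    REG(SDCFG_OFF) = sdcfg;

    /* 3.2.2: refresh control; the refresh-rate field is chosen so the
     * average refresh interval meets tREFI for the attached SDRAM */
    REG(SDRFC_OFF) = sdrfc;
}
```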
Recommended publications
  • 2GB DDR3 SDRAM 72Bit SO-DIMM
    Apacer Memory Product Specification: 2GB DDR3 SDRAM 72-bit SO-DIMM. Part Number: 78.A2GCB.AF10C; Bandwidth: 8.5 GB/sec; Speed: 1066 Mbps; Max Frequency: 533 MHz; CAS Latency: CL7; Density: 2 GB; Organization: 256M x 72; Component Composition: 256M x 8 * 9; Number of Ranks: 1.
    Specifications:
    - Supports ECC error detection and correction
    - On-DIMM thermal sensor: yes
    - Density: 2 GB
    - Organization: 256M words x 72 bits, 1 rank
    - Mounts 9 pieces of 2-Gbit DDR3 SDRAM in sealed FBGA packages
    - Package: 204-pin socket-type small-outline dual in-line memory module (SO-DIMM); PCB height 30.0 mm; lead pitch 0.6 mm (pin); lead-free (RoHS compliant)
    - Power supply: VDD = 1.5 V ± 0.075 V
    - Eight internal banks for concurrent operation (components)
    - Interface: SSTL_15
    - Burst lengths (BL): 8 and 4 with Burst Chop (BC)
    - /CAS latency (CL): 6, 7, 8, 9
    - /CAS write latency (CWL): 5, 6, 7
    - Precharge: auto-precharge option for each burst access
    - Refresh: auto-refresh, self-refresh
    - Refresh cycles: average refresh period 7.8 µs at 0°C < TC < +85°C, 3.9 µs at +85°C < TC < +95°C
    - Operating case temperature range: TC = 0°C to +95°C
    - Serial presence detect (SPD)
    - VDDSPD = 3.0 V to 3.6 V
    Features:
    - Double-data-rate architecture; two data transfers per clock cycle.
    - High-speed data transfer is realized by the 8-bit prefetch pipelined architecture.
    - Bi-directional differential data strobe (DQS and /DQS) is transmitted/received with data for capturing data at the receiver.
    - DQS is edge-aligned with data for READs and center-aligned with data for WRITEs.
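As a quick sanity check on the headline figures above, the 8.5 GB/sec bandwidth and the 7.8 µs refresh period both follow from simple arithmetic. The sketch below assumes the usual interpretations: only the 64 data bits (not the 8 ECC bits) count toward usable bandwidth, and the refresh period comes from the standard DDR3 scheme of 8192 auto-refresh commands per 64 ms retention window.

```c
/* Back-of-the-envelope check of the Apacer module's headline numbers.
 * Assumes 64 usable data bits (8 bits are ECC) and the standard
 * 8K-refresh / 64 ms DDR3 refresh scheme. */
#include <stdio.h>

int main(void)
{
    double clock_mhz  = 533.0;               /* interface clock              */
    double transfers  = 2.0 * clock_mhz;     /* DDR: 1066 MT/s per pin       */
    double data_bytes = 64.0 / 8.0;          /* 64 data bits = 8 bytes/beat  */
    double peak_gb_s  = transfers * 1e6 * data_bytes / 1e9;

    printf("Per-pin data rate: %.0f Mbps\n", transfers);
    printf("Peak bandwidth:    %.2f GB/s (spec rounds to 8.5 GB/s)\n", peak_gb_s);

    /* 8192 refreshes spread over a 64 ms retention window */
    printf("Average refresh period: %.1f us\n", 64e3 / 8192.0);
    return 0;
}
```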
  • Benchmarking the Intel FPGA SDK for OpenCL Memory Interface
    The Memory Controller Wall: Benchmarking the Intel FPGA SDK for OpenCL Memory Interface. Hamid Reza Zohouri*†, Satoshi Matsuoka*‡. *Tokyo Institute of Technology, †Edgecortix Inc. Japan, ‡RIKEN Center for Computational Science (R-CCS). {zohour.h.aa@m, matsu@is}.titech.ac.jp
    Abstract: Supported by their high power efficiency and recent advancements in High Level Synthesis (HLS), FPGAs are quickly finding their way into HPC and cloud systems. Large amounts of work have been done so far on loop and area optimizations for different applications on FPGAs using HLS. However, a comprehensive analysis of the behavior and efficiency of the memory controller of FPGAs is missing in literature, which becomes even more crucial when the limited memory bandwidth of modern FPGAs compared to their GPU counterparts is taken into account. In this work, we will analyze the memory interface generated by Intel FPGA SDK for OpenCL with different configurations for input/output arrays, vector size, interleaving, kernel programming model, on-chip channels, operating frequency, padding, and multiple types of overlapped blocking. Our results point to multiple shortcomings in the memory controller of Intel FPGAs, especially with respect to memory access alignment, that can hinder the programmer's…
    …efficiency on Intel FPGAs with different configurations for input/output arrays, vector size, interleaving, kernel programming model, on-chip channels, operating frequency, padding, and multiple types of blocking.
    • We outline one performance bug in Intel's compiler, and multiple deficiencies in the memory controller, leading to significant loss of memory performance for typical applications. In some of these cases, we provide work-arounds to improve the memory performance.
    II. Methodology. A. Memory Benchmark Suite: For our evaluation, we develop an open-source benchmark suite called FPGAMemBench, available at https://github.com/zohourih/FPGAMemBench.
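For readers unfamiliar with memory benchmark suites, the sketch below shows the general shape of such a measurement on a CPU host: time a large streaming transfer and divide the bytes moved by the elapsed time. It is not code from FPGAMemBench (the actual suite targets OpenCL kernels on FPGA boards); the buffer size and the copy kernel are arbitrary choices made for illustration.

```c
/* Minimal host-side streaming-bandwidth measurement (illustration only,
 * not part of FPGAMemBench). Times one large memcpy and reports GB/s. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void)
{
    size_t n = 256u << 20;                       /* 256 MiB per buffer (arbitrary) */
    char *src = malloc(n), *dst = malloc(n);
    if (!src || !dst) return 1;
    memset(src, 1, n);                           /* touch pages before timing */
    memset(dst, 0, n);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    memcpy(dst, src, n);                         /* the measured transfer */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    /* a copy moves n bytes in and n bytes out */
    printf("Copy bandwidth: %.2f GB/s\n", 2.0 * n / secs / 1e9);

    free(src);
    free(dst);
    return 0;
}
```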
  • Dual-DIMM DDR2 and DDR3 SDRAM Board Design Guidelines, External
    5. Dual-DIMM DDR2 and DDR3 SDRAM Board Design Guidelines. June 2012, EMI_DG_005-4.1.
    This chapter describes guidelines for implementing dual unbuffered DIMM (UDIMM) DDR2 and DDR3 SDRAM interfaces. It discusses the impact on signal integrity of the data signal under the following conditions in a dual-DIMM configuration:
    ■ Populating just one slot versus populating both slots
    ■ Populating slot 1 versus slot 2 when only one DIMM is used
    ■ An on-die termination (ODT) setting of 75 Ω versus an ODT setting of 150 Ω
    For detailed information about a single-DIMM DDR2 SDRAM interface, refer to the DDR2 and DDR3 SDRAM Board Design Guidelines chapter.
    DDR2 SDRAM: This section describes guidelines for implementing a dual-slot unbuffered DDR2 SDRAM interface, operating at up to 400-MHz and 800-Mbps data rates. Figure 5–1 shows a typical DQS, DQ, and DM signal topology for a dual-DIMM interface configuration using the ODT feature of the DDR2 SDRAM components. (Figure 5–1, Dual-DIMM DDR2 SDRAM Interface Configuration, shows the FPGA driver connected through board traces to DDR2 SDRAM DIMMs in slot 1 and slot 2, with a parallel termination resistor RT = 54 Ω to VTT at the FPGA end of the line; the note to the figure states this resistor is optional for devices that support dynamic on-chip termination (OCT).)
  • A Modern Primer on Processing in Memory
    A Modern Primer on Processing in Memory. Onur Mutlu [a,b], Saugata Ghose [b,c], Juan Gómez-Luna [a], Rachata Ausavarungnirun [d]. SAFARI Research Group. a: ETH Zürich; b: Carnegie Mellon University; c: University of Illinois at Urbana-Champaign; d: King Mongkut's University of Technology North Bangkok.
    Abstract: Modern computing systems are overwhelmingly designed to move data to computation. This design choice goes directly against at least three key trends in computing that cause performance, scalability, and energy bottlenecks: (1) data access is a key bottleneck, as many important applications are increasingly data-intensive while memory bandwidth and energy do not scale well; (2) energy consumption is a key limiter in almost all computing platforms, especially server and mobile systems; (3) data movement, especially off-chip to on-chip, is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are especially severely felt in the data-intensive server and energy-constrained mobile systems of today. At the same time, conventional memory technology is facing many technology scaling challenges in terms of reliability, energy, and performance. As a result, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of higher cost. The emergence of 3D-stacked memory plus logic, the adoption of error-correcting codes inside the latest DRAM chips, the proliferation of different main memory standards and chips specialized for different purposes (e.g., graphics, low-power, high bandwidth, low latency), and the necessity of designing new solutions to serious reliability and security issues, such as the RowHammer phenomenon, are evidence of this trend.
  • Advanced x86
    Advanced x86: BIOS and System Management Mode Internals, Input/Output. Xeno Kovah && Corey Kallenberg, LegbaCore, LLC. All materials are licensed under a Creative Commons "Share Alike" license, http://creativecommons.org/licenses/by-sa/3.0/. Attribution condition: You must indicate that derivative work "Is derived from John Butterworth & Xeno Kovah's 'Advanced Intel x86: BIOS and SMM' class posted at http://opensecuritytraining.info/IntroBIOS.html".
    Input/Output (I/O): I/O, I/O, it's off to work we go… There are two types of I/O: (1) memory-mapped I/O (MMIO) and (2) port I/O (PIO), also called isolated I/O or port-mapped I/O (PMIO). x86 systems employ both types of I/O. Both methods map peripheral devices, and the address space of each is accessed using instructions that typically require Ring 0 privileges (real-addressing mode has no implementation of rings, so no privilege escalation is needed). I/O ports can be mapped so that they appear in the I/O address space, in the physical-memory address space (memory-mapped I/O), or in both; for example, PCI configuration space in a PCIe system is both memory-mapped and accessible via port I/O. We'll learn about that in the next section. The I/O Controller Hub contains the registers that are located in both the I/O address space and the memory-mapped address space.
    Memory-Mapped I/O: Devices can also be mapped to the physical address space instead of (or in addition to) the I/O address space. Even though it is a hardware device on the other end of that access request, you can operate on it like it's memory: any of the processor's instructions…
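The distinction between the two I/O styles is easiest to see in code. The snippet below is a minimal Linux/x86 sketch, not material from the course: the CMOS RTC index/data ports (0x70/0x71) and the mapped physical address (0xFED00000, a common HPET base) are assumptions chosen only to show one IN/OUT-style access and one ordinary-load access; it must run as root.

```c
/* Minimal sketch contrasting port I/O and memory-mapped I/O on Linux/x86.
 * ASSUMPTIONS: port 0x70/0x71 (CMOS RTC index/data) and physical address
 * 0xFED00000 (a common HPET base) are illustrative; real addresses come from
 * chipset documentation or ACPI tables. Requires root. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/io.h>

int main(void)
{
    /* Port I/O: dedicated IN/OUT instructions, a separate address space */
    if (ioperm(0x70, 2, 1) == 0) {          /* request access to ports 0x70-0x71 */
        outb(0x00, 0x70);                   /* select CMOS register 0 (RTC seconds) */
        uint8_t seconds = inb(0x71);        /* read it back through the data port */
        printf("Port I/O read: 0x%02x\n", seconds);
    }

    /* Memory-mapped I/O: device registers appear in the physical address space */
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd >= 0) {
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ, MAP_SHARED,
                                       fd, (off_t)0xFED00000u); /* assumed HPET base */
        if (regs != MAP_FAILED) {
            /* ordinary load instructions now reach the device registers */
            printf("MMIO read:     0x%08x\n", (unsigned)regs[0]);
            munmap((void *)regs, 4096);
        }
        close(fd);
    }
    return 0;
}
```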
  • Motorola MPC107 PCI Bridge/Integrated Memory Controller
    MPC107FACT/D Fact Sheet: Motorola MPC107 PCI Bridge/Integrated Memory Controller. The MPC107 PCI Bridge/Integrated Memory Controller provides a bridge between the Peripheral Component Interconnect (PCI) bus and PowerPC 603e™, PowerPC 740™, PowerPC 750™, or MPC7400 microprocessors. PCI support allows system designers to design systems quickly using peripherals already designed for PCI and the other standard interfaces available in the personal computer hardware environment. The MPC107 provides many of the other necessities for embedded applications, including a high-performance memory controller and dual processor support, a 2-channel flexible DMA controller, an interrupt controller, an I2O-ready message unit, an inter-integrated circuit controller (I2C), and low-skew clock drivers. The MPC107 contains an Embedded Programmable Interrupt Controller (EPIC) featuring five hardware interrupts (IRQs) as well as sixteen serial interrupts along with four timers. The MPC107 uses an advanced, 2.5-volt HiP3 process technology and is fully compatible with TTL devices.
    Integrated Memory Controller: The memory interface controls processor and PCI interactions to main memory. It supports a variety of programmable timings for DRAM (FPM, EDO), SDRAM, and ROM/Flash ROM configurations, up to speeds of 100 MHz.
    PCI Bus Support: The MPC107 PCI interface is designed to connect the processor and memory buses to the PCI local bus without the need for "glue" logic at speeds up to 66 MHz. The MPC107 acts as either a master or slave device on the PCI bus and contains a PCI bus arbitration unit, which reduces the need for an equivalent external unit, thus reducing total system complexity and cost. (The excerpt's MPC107 block diagram shows the 60x bus, the memory data path, and ECC/parity logic.)
  • The Impulse Memory Controller
    IEEE Transactions on Computers, Vol. 50, No. 11, November 2001. The Impulse Memory Controller. John B. Carter, Member, IEEE, Zhen Fang, Student Member, IEEE, Wilson C. Hsieh, Sally A. McKee, Member, IEEE, and Lixin Zhang, Student Member, IEEE.
    Abstract: Impulse is a memory system architecture that adds an optional level of address indirection at the memory controller. Applications can use this level of indirection to remap their data structures in memory. As a result, they can control how their data is accessed and cached, which can improve cache and bus utilization. The Impulse design does not require any modification to processor, cache, or bus designs since all the functionality resides at the memory controller. As a result, Impulse can be adopted in conventional systems without major system changes. We describe the design of the Impulse architecture and how an Impulse memory system can be used in a variety of ways to improve the performance of memory-bound applications. Impulse can be used to dynamically create superpages cheaply, to dynamically recolor physical pages, to perform strided fetches, and to perform gathers and scatters through indirection vectors. Our performance results demonstrate the effectiveness of these optimizations in a variety of scenarios. Using Impulse can speed up a range of applications from 20 percent to over a factor of 5. Alternatively, Impulse can be used by the OS for dynamic superpage creation; the best policy for creating superpages using Impulse outperforms previously known superpage creation policies. Index Terms: Computer architecture, memory systems.
    1 Introduction: Since 1987, microprocessor performance has improved at a rate of 55 percent per year; in contrast, DRAM latencies have improved by only 7 percent per year and DRAM… …memory. By giving applications control (mediated by the OS) over the use of shadow addresses, Impulse supports application-specific optimizations that restructure data.
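The "gathers and scatters through indirection vectors" that Impulse accelerates are easiest to picture in plain software form. The sketch below is only an illustration of the access pattern, using arbitrary example indices; Impulse's contribution is that the memory controller performs this indirection itself, so the processor sees a dense, cache-friendly array instead of scattered DRAM accesses.

```c
/* Software illustration of gather/scatter through an indirection vector.
 * This is not Impulse code; it only shows the access pattern that Impulse
 * remaps at the memory controller. */
#include <stdio.h>
#include <stdlib.h>

/* Gather: dense[i] = sparse[index[i]] */
static void gather(const double *sparse, const size_t *index,
                   double *dense, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dense[i] = sparse[index[i]];
}

/* Scatter: sparse[index[i]] = dense[i] */
static void scatter(double *sparse, const size_t *index,
                    const double *dense, size_t n)
{
    for (size_t i = 0; i < n; i++)
        sparse[index[i]] = dense[i];
}

int main(void)
{
    double sparse[1000] = {0};
    size_t index[4] = {7, 250, 513, 999};    /* arbitrary example indices */
    double dense[4];

    sparse[7] = 1.0; sparse[250] = 2.0; sparse[513] = 3.0; sparse[999] = 4.0;
    gather(sparse, index, dense, 4);          /* pulls the scattered values together */
    for (int i = 0; i < 4; i++)
        printf("dense[%d] = %.1f\n", i, dense[i]);

    dense[0] = 9.0;
    scatter(sparse, index, dense, 4);         /* writes them back to their slots */
    return 0;
}
```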
  • Optimizing Thread Throughput for Multithreaded Workloads on Memory Constrained CMPs
    Optimizing Thread Throughput for Multithreaded Workloads on Memory Constrained CMPs. Major Bhadauria and Sally A. McKee, Computer Systems Lab, Cornell University, Ithaca, NY, USA. [email protected], [email protected]
    Abstract: Multi-core designs have become the industry imperative, replacing our reliance on increasingly complicated micro-architectural designs and VLSI improvements to deliver increased performance at lower power budgets. Performance of these multi-core chips will be limited by the DRAM memory system: we demonstrate this by modeling a cycle-accurate DDR2 memory controller with SPLASH-2 workloads. Surprisingly, benchmarks that appear to scale well with the number of processors fail to do so when memory is accurately modeled. We frequently find that the most efficient configuration is not the one with the most threads. By choosing the most efficient number of threads for each benchmark, average energy delay efficiency improves by a factor of 3.39, and performance improves by 19.7%, on average.
    1. Introduction: Power and thermal constraints have begun to limit the maximum operating frequency of high performance processors. The cubic increase in power from increases in frequency and the higher voltages required to attain those frequencies has reached a plateau. By leveraging increasing die space for more processing cores (creating chip multiprocessors, or CMPs) and larger caches, designers hope that multi-threaded programs can exploit shrinking transistor sizes to deliver equal or higher throughput as single-threaded, single-core predecessors. The current software paradigm is based on the assumption that multi-threaded programs with little contention for shared data scale (nearly) linearly with the number of processors, yielding power-efficient data through…
  • COSC 6385 Computer Architecture - Multi-Processors (IV) Simultaneous Multi-Threading and Multi-Core Processors Edgar Gabriel Spring 2011
    COSC 6385 Computer Architecture - Multi-Processors (IV): Simultaneous multi-threading and multi-core processors. Edgar Gabriel, Spring 2011.
    Moore's Law: a long-term trend in the number of transistors per integrated circuit; the number of transistors doubles roughly every 18 months. Source: http://en.wikipedia.org/wki/Images:Moores_law.svg
    What do we do with that many transistors? Optimizing the execution of a single instruction stream through:
    - Pipelining: overlap the execution of multiple instructions. Example: all RISC architectures; Intel x86 underneath the hood.
    - Out-of-order execution: allow instructions to overtake each other in accordance with code dependencies (RAW, WAW, WAR). Example: all commercial processors (Intel, AMD, IBM, SUN).
    - Branch prediction and speculative execution: reduce the number of stall cycles due to unresolved branches. Example: (nearly) all commercial processors.
    - Multi-issue processors: allow multiple instructions to start execution per clock cycle. Superscalar (Intel x86, AMD, …) vs. VLIW architectures.
    - VLIW/EPIC architectures: allow compilers to indicate independent instructions per issue packet. Example: Intel Itanium series.
    - Vector units: allow for the efficient expression and execution of vector operations. Example: SSE, SSE2, SSE3, SSE4 instructions.
    Limitations of optimizing a single instruction…
  • WP127: "Embedded System Design Considerations" V1.0 (03/06/2002)
    White Paper: Virtex-II Series. WP127 (v1.0) March 6, 2002. Embedded System Design Considerations. By: David Naylor.
    Embedded systems see a steadily increasing bandwidth mismatch between raw processor MIPS and surrounding components. System performance is not solely dependent upon processor capability. While a processor with a higher MIPS specification can provide incremental system performance improvement, eventually the lagging surrounding components become a system performance bottleneck. This white paper examines some of the factors contributing to this. The analysis of bus and component performance leads to the conclusion that matching of processor and component performance provides a good cost-performance trade-off.
    Introduction: Today's systems are composed of a hierarchy, or layers, of subsystems with varying access times. Figure 1 depicts a layered performance pyramid with slower system components in the wider, lower layers and faster components nearer the top. The upper six layers represent an embedded system. In descending order of speed, the layers in this pyramid are as follows:
    1. CPU
    2. Cache memory
    3. Processor Local Bus (PLB)
    4. Fast and small Static Random Access Memory (SRAM) subsystem
    5. Slow and large Dynamic Random Access Memory (DRAM) subsystem
    6. …
  • AN 436: Using DDR3 SDRAM in Stratix III and Stratix IV Devices.Pdf
    AN 436: Using DDR3 SDRAM in Stratix III and Stratix IV Devices. © November 2008, AN-436-4.0.
    Introduction: DDR3 SDRAM is the latest generation of DDR SDRAM technology, with improvements that include lower power consumption, higher data bandwidth, and enhanced signal quality with multiple on-die termination (ODT) selections and output driver impedance control. DDR3 SDRAM brings higher memory performance to a broad range of applications, such as PCs, embedded processor systems, image processing, storage, communications, and networking. Although DDR2 SDRAM is currently the more popular SDRAM, to save system power and increase system performance you should consider using DDR3 SDRAM. DDR3 SDRAM offers lower power by using 1.5 V for the supply and I/O voltage, compared to the 1.8-V supply and I/O voltage used by DDR2 SDRAM. DDR3 SDRAM also has better maximum throughput than DDR2 SDRAM, achieved by increasing the data rate per pin and the number of banks (8 banks are standard). Note: the Altera® ALTMEMPHY megafunction and the DDR3 SDRAM high-performance controller only support local interfaces running at half the rate of the memory interface. Altera Stratix® III and Stratix IV devices support DDR3 SDRAM interfaces with dedicated DQS, write-, and read-leveling circuitry. Table 1 displays the maximum clock frequency for DDR3 SDRAM in Stratix III devices.
    Table 1. DDR3 SDRAM Maximum Clock Frequency Supported in Stratix III Devices (Notes 1, 2)
      Speed Grade                  | fMAX (MHz)
      –2                           | 533 (3)
      –3 and I3                    | 400
      –4, 4L, and I4L at 1.1 V     | 333 (4), (5)
      –4, 4L, and I4L at 0.9 V     | Not supported
    Notes to Table 1: (1) Numbers are preliminary until characterization is final.
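Two of the claims above reduce to short arithmetic: the per-pin data rate is twice the interface clock, and the I/O power advantage of 1.5 V over 1.8 V can be roughed out with a quadratic voltage model. The sketch below uses the -2 speed grade figure from Table 1; the quadratic power model is a simplifying assumption, not a number from the application note.

```c
/* Back-of-the-envelope sketch of two numbers behind the DDR3 claims above.
 * ASSUMPTION: switching power scales roughly with V^2; this is a rough model,
 * not a figure from the application note. */
#include <stdio.h>

int main(void)
{
    double fmax_mhz  = 533.0;                  /* -2 speed grade from Table 1 */
    double data_rate = 2.0 * fmax_mhz;         /* DDR: two transfers per clock */

    double v_ddr2 = 1.8, v_ddr3 = 1.5;
    double rel_power = (v_ddr3 * v_ddr3) / (v_ddr2 * v_ddr2);

    printf("Per-pin data rate at %.0f MHz: %.0f Mbps\n", fmax_mhz, data_rate);
    printf("DDR3 vs DDR2 I/O switching power (V^2 model): %.0f%% (about %.0f%% lower)\n",
           100.0 * rel_power, 100.0 * (1.0 - rel_power));
    return 0;
}
```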
  • Computer Architecture Lecture 12: Memory Interference and Quality of Service
    Computer Architecture Lecture 12: Memory Interference and Quality of Service. Prof. Onur Mutlu, ETH Zürich, Fall 2017 (1 November 2017).
    Summary of last week's lectures: control dependence handling (the problem and six solutions), branch prediction, trace caches, and other methods of control dependence handling (fine-grained multithreading, predicated execution, multi-path execution).
    Agenda for today: shared vs. private resources in multi-core systems; memory interference and the QoS problem; memory scheduling; other approaches to mitigate and control memory interference.
    Quick summary papers: "Parallelism-Aware Batch Scheduling: Enhancing both Performance and Fairness of Shared DRAM Systems"; "The Blacklisting Memory Scheduler: Achieving High Performance and Fairness at Low Cost"; "Staged Memory Scheduling: Achieving High Performance and Scalability in Heterogeneous Systems"; "Parallel Application Memory Scheduling"; "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning".
    Shared resource design for multi-core systems. Memory system: a shared resource view. Resource sharing concept: instead of dedicating a hardware resource to a hardware context, allow multiple contexts to use it; example resources include functional units, pipelines, caches, buses, and memory. Why? Resource sharing improves utilization and efficiency, and therefore throughput: when a resource is left idle by one thread, another thread can use it, with no need to replicate shared data. It also reduces communication latency. For example, …