CS/CoE 1541: Introduction to Computer Architecture – Memory Hierarchy


Sangyeun Cho, Computer Science Department, University of Pittsburgh

CPU clock rate
. Apple II – MOS 6502 (1975): 1~2MHz
. Original IBM PC (1981) – Intel 8088: 4.77MHz
. Intel 486: 50MHz
. DEC Alpha
  • EV4 (1991): 100~200MHz
  • EV5 (1995): 266~500MHz
  • EV6 (1998): 450~600MHz
  • EV67 (1999): 667~850MHz
. Today's processors: 2~5GHz

DRAM access latency
. How long would it take to fetch a memory block from main memory?
  • Time to determine that the data is not on-chip (cache miss resolution time)
  • Time to talk to the memory controller (possibly on a separate chip)
  • Possible queuing delay at the memory controller
  • Time for opening a DRAM row
  • Time for selecting a portion of the selected row (CAS latency)
  • Time to transfer data from DRAM to the CPU, possibly through a separate chip
  • Time to fill caches as needed
. The total latency can be anywhere from 100~400 CPU cycles

Growing gap
. (Figure: the growing gap between CPU clock rate and DRAM access latency.)

Idea of memory hierarchy
. Regs → L1 cache (SRAM) → L2 cache (SRAM) → Main memory (DRAM) → Hard disk (magnetics)
. Toward the CPU: smaller, faster, more expensive per byte; away from the CPU: larger, slower, cheaper per byte

Memory hierarchy goals
. To create an illusion of "fast and large" main memory
  • As fast as a small SRAM-based memory
  • As large (and cheap) as a DRAM-based memory
. To provide the CPU with necessary data and instructions as quickly as possible
  • Frequently used data must be placed near the CPU
  • "Cache hit": when the CPU finds its data in the cache
  • Cache hit rate = # of cache hits / # of cache accesses
  • Avg. memory access latency (AMAL) = cache hit time + (1 – cache hit rate) × miss penalty (a worked sketch follows this section)
  • To decrease AMAL, decrease ____, increase ____, and reduce ____.
. Cache helps reduce bandwidth pressure on the memory bus
  • Cache becomes a "filter"

Cache
. (From Webster online): 1. a: a hiding place especially for concealing and preserving provisions or implements b: a secure place of storage … 3. a computer memory with very short access time used for storage of frequently or recently used instructions or data – called also cache memory
. Notions of cache memory were used as early as 1967 in IBM mainframe machines

Cache operations
. (Figure: while running the loop below, the CPU asks the cache "Do I have A[0]?"; on a miss the answer is "No", the cache requests "Get A[0]" from main memory at address &A[0], and the data returns through the cache to the CPU.)

```c
for (i = 0; i < N; i++) {
    C[i] = A[i] + B[i];
    SUM = SUM + C[i];
}
```

. CPU-cache operations
  • CPU: read request
    Cache: hit – provide data
    Cache: miss – talk to main memory; hold the CPU until data arrives; fill the cache (if needed, kick out old data); provide data
  • CPU: write request
    Cache: hit – update data
    Cache: miss – put data in a buffer (pretend data is written); let the CPU proceed
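The AMAL formula above is worth exercising with numbers. A minimal sketch in C, assuming an illustrative 1-cycle hit time and a 200-cycle miss penalty (within the 100~400-cycle range quoted earlier); these values are my choices, not from the slides:

```c
#include <stdio.h>

/* Average memory access latency, as defined above:
 * AMAL = hit_time + (1 - hit_rate) * miss_penalty  */
static double amal(double hit_time, double hit_rate, double miss_penalty)
{
    return hit_time + (1.0 - hit_rate) * miss_penalty;
}

int main(void)
{
    /* Illustrative numbers: 1-cycle L1 hit, 200-cycle miss penalty. */
    printf("hit rate 0.90: %.1f cycles\n", amal(1.0, 0.90, 200.0)); /* 21.0 */
    printf("hit rate 0.99: %.1f cycles\n", amal(1.0, 0.99, 200.0)); /*  3.0 */
    return 0;
}
```

Note how strongly AMAL depends on hit rate: improving it from 0.90 to 0.99 cuts the average latency from 21 cycles to 3.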
. Cache-main memory operations
  • Cache: read request – memory: provide data
  • Cache: write request – memory: update memory

Cache organization
. Caches use "blocks" or "lines" (block > byte) as their granule of management
. Memory > cache: we can only keep a subset of memory blocks
. A cache is in essence a fixed-width hash table; the memory blocks kept in a cache are thus associated with their addresses (or "tagged")
. (Figure: an N-bit address indexes an address (tag) table and a data table of 2^N words; supporting circuitry produces hit/miss and the specified data word.)

L1 cache vs. L2 cache
. Their basic parameters are similar
  • Associativity, block size, and cache size (the capacity of the data array)
. Address used to index
  • L1: typically the virtual address (to quickly index first); using a virtual address causes some complexities
  • L2: typically the physical address, which is available by then
. System visibility
  • L1: not visible
  • L2: page coloring can affect hit rate
. Hardware organization (esp. in multicores)
  • L1: private
  • L2: often shared among cores

Key questions
. Where to place a block?
. How to find a block?
. Which block to replace for a new block?
. How to handle a write?
  • Writes make a cache design much more complex!

Where to place a block?
. Block placement is a matter of mapping
. If you have a simple rule to place data, you can find it later using the same rule

Direct-mapped cache
. 2^B-byte block, 2^M entries, N-bit address
. The address splits into an (N-B-M)-bit tag, an M-bit index, and a B-bit block offset; the stored tag is compared (=?) against the address tag to produce hit/miss and the data (a lookup sketch follows this section)

2-way set-associative cache
. 2^B-byte block, 2^M entries, N-bit address
. Organized as two direct-mapped halves of 2^(M-1) entries each, probed in parallel; the index shrinks to M-1 bits, the tag widens accordingly, and a hit in either way supplies the data

Fully associative cache
. 2^B-byte block, 2^M entries, N-bit address
. The address splits into just an (N-B)-bit tag and a B-bit offset; there are no index bits
. CAM (content-addressable memory)
  • Input: content; output: index
  • Used for the tag memory

Why caches work (or do not work)
. Principle of locality
  • Temporal locality: if location A is accessed now, it'll be accessed again soon
  • Spatial locality: if location A is accessed now, a location nearby (e.g., A+1) will be accessed soon
. Can you explain how locality is manifested in your program (at the source code level)?
  • Data
  • Instructions
. Can you write two programs doing the same thing, each having
  • A high degree of locality
  • Very low locality
  (one possible answer is sketched after this section)

Which block to replace?
. Which block to replace, to make room for a new block on a miss?
. Goal: minimize the # of total misses
. Trivial in a direct-mapped cache; N choices in an N-way associative cache
. What is the optimal policy?
  • Replacing the "most remotely used" block – the one whose next use lies farthest in the future – is considered optimal
  • This is an oracle scheme – we do not know the future
. Replacement approaches (an LRU sketch follows this section)
  • LRU (least recently used) – look at the past to predict the future
  • FIFO (first in, first out) – honor the new ones
  • Random – don't remember anything
  • Cost-based – what is the cost (e.g., latency) of bringing this block again?
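To make the tag/index/offset split concrete, here is a hedged sketch of a direct-mapped lookup in C. The parameters B = 6 (64-byte blocks) and M = 7 (128 entries) and all names are illustrative choices, not from the slides:

```c
#include <stdbool.h>
#include <stdint.h>

#define B 6                         /* log2(block size): 64-byte blocks */
#define M 7                         /* log2(# of entries): 128 entries  */
#define NENTRIES (1u << M)

struct line {
    bool     valid;
    uint32_t tag;                   /* upper (N - B - M) address bits   */
};

static struct line cache[NENTRIES]; /* zero-initialized: all invalid    */

/* Returns true on a hit; on a miss, installs the new tag (a real
 * cache would also fetch the 2^B-byte data block).                 */
static bool cache_access(uint32_t addr)
{
    uint32_t index = (addr >> B) & (NENTRIES - 1); /* middle M bits  */
    uint32_t tag   = addr >> (B + M);              /* remaining bits */

    if (cache[index].valid && cache[index].tag == tag)
        return true;                               /* hit            */

    cache[index].valid = true;                     /* miss: fill     */
    cache[index].tag   = tag;
    return false;
}

int main(void)
{
    cache_access(0x1000);               /* cold miss                 */
    bool hit = cache_access(0x1010);    /* same 64-byte block: hit   */
    return hit ? 0 : 1;
}
```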
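As one possible answer to the "two programs" question: both routines below compute the same sum, but the first walks the matrix in the order it is laid out in memory (high spatial locality), while the second strides a full row between consecutive accesses (very low locality once N is large). The matrix size is an illustrative choice:

```c
#define N 1024

/* High locality: row-major traversal matches C's memory layout, so
 * consecutive accesses fall in the same cache block.                */
static long sum_rowmajor(int a[N][N])
{
    long sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Low locality: column-major traversal strides N * sizeof(int) bytes
 * between accesses, touching a different cache block every time.    */
static long sum_colmajor(int a[N][N])
{
    long sum = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void)
{
    static int a[N][N];               /* zero-initialized, ~4MB */
    return (int)(sum_rowmajor(a) + sum_colmajor(a));
}
```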
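And a hedged sketch of LRU bookkeeping for one set of a 4-way cache, using an age-stamp scheme. Real hardware often approximates LRU with pseudo-LRU bits instead; the names and parameters here are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define WAYS 4

struct way {
    bool     valid;
    uint32_t tag;
    uint32_t last_used;   /* age stamp; smallest value = LRU */
};

/* Access one set; 'now' must increase monotonically from 1, so that
 * invalid ways (last_used == 0) are always chosen as victims first. */
static bool set_access(struct way set[WAYS], uint32_t tag, uint32_t now)
{
    int victim = 0;
    for (int w = 0; w < WAYS; w++) {
        if (set[w].valid && set[w].tag == tag) {
            set[w].last_used = now;            /* hit: refresh the age  */
            return true;
        }
        if (set[w].last_used < set[victim].last_used)
            victim = w;                        /* track the oldest way  */
    }
    set[victim].valid     = true;              /* miss: replace the LRU */
    set[victim].tag       = tag;
    set[victim].last_used = now;
    return false;
}

int main(void)
{
    static struct way set[WAYS];               /* zero-initialized */
    uint32_t t = 1;
    set_access(set, 0xA, t++);                 /* miss: fills a way */
    set_access(set, 0xB, t++);                 /* miss: fills a way */
    return set_access(set, 0xA, t++) ? 0 : 1;  /* hit               */
}
```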
How to handle a write?
. Design considerations
  • Performance
  • Design complexity
. Allocation policy (on a miss)
  • Write-allocate
  • No-write-allocate
  • Write-validate
. Update policy
  • Write-through
  • Write-back
. Typical combinations
  • Write-back with write-allocate
  • Write-through with no-write-allocate

Write-through vs. write-back
. L1 cache: advantages of write-through + no-write-allocate
  • Simple control
  • No stalls for evicting dirty data on an L1 miss with an L2 hit
  • Avoids L1 cache pollution with results that are not read for a while
  • Avoids problems with coherence (L2 is consistent with L1)
  • Allows efficient transient error handling: parity protection in L1 and ECC in L2
  • What about high traffic between L1 and L2, esp. in a multicore processor?
. L2 cache: advantages of write-back + write-allocate
  • Typically reduces overall bus traffic by filtering all L1 write-through traffic
  • Better able to capture temporal locality of infrequently written memory locations
  • Provides a safety net for programs where write-allocate helps a lot
    Garbage-collected heaps
    Write-followed-by-read situations
    Linking loaders (if a unified cache, it need not be flushed before execution)
. Some ISAs/caches support explicitly installing cache blocks with empty contents or common values (e.g., zero)

Write buffer
. Waiting for a write to main memory is inefficient for a write-through scheme (slow)
  • Every write to the cache causes a write to main memory, and the processor stalls for that write
. Example: base CPI = 1.2, 13% of all instructions are a store, 10 cycles to write to memory
  • CPI = 1.2 + 0.13 × 10 = 2.5
  • More than doubled the CPI by waiting…
. A write buffer holds data while it is waiting to be written to memory; it frees the processor to continue executing instructions (a FIFO sketch follows this section)
  • The processor thinks the write was done (quickly)
. When the write buffer is empty
  • Send written data to the buffer and eventually to main memory
. When the write buffer is full
  • Wait for the write buffer to become empty; then send data to the write buffer
. Is one write buffer entry enough? Probably not
  • Writes may occur faster than they can be handled by the memory system
  • Writes tend to occur in bursts; memory may be fast but can't handle several writes at once
  • Typical buffer size is 4~8, possibly more

Revisiting the write-back cache
. Each entry in the tag array holds a dirty bit (D), a valid bit (V), and an address tag, alongside its block in the data array (a line-state sketch follows this section)
  • The address tag is the "id" of the associated data
  • The "valid" bit is set when data is brought in
  • The "dirty" bit is set when data is modified

Alpha 21264 example / More examples
. 64B block …
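A minimal sketch of the write-buffer behavior described above, modeled as a 4-entry FIFO (matching the typical 4~8 range). The structure and function names are illustrative, not from the slides:

```c
#include <stdbool.h>
#include <stdint.h>

#define WBUF_SIZE 4          /* typical size is 4~8 entries */

struct wbuf {
    uint32_t addr[WBUF_SIZE];
    uint32_t data[WBUF_SIZE];
    int head, count;
};

/* CPU side: on a write, enqueue and keep executing. Only when the
 * buffer is full does the processor actually stall.               */
static bool wbuf_put(struct wbuf *b, uint32_t addr, uint32_t data)
{
    if (b->count == WBUF_SIZE)
        return false;        /* full: caller must stall and retry  */
    int tail = (b->head + b->count) % WBUF_SIZE;
    b->addr[tail] = addr;
    b->data[tail] = data;
    b->count++;
    return true;             /* write "done" from the CPU's view   */
}

/* Memory side: drains one entry per available memory write slot. */
static bool wbuf_drain(struct wbuf *b, uint32_t *addr, uint32_t *data)
{
    if (b->count == 0)
        return false;
    *addr = b->addr[b->head];
    *data = b->data[b->head];
    b->head = (b->head + 1) % WBUF_SIZE;
    b->count--;
    return true;
}

int main(void)
{
    static struct wbuf b;    /* zero-initialized: empty buffer */
    uint32_t a, d;
    wbuf_put(&b, 0x100, 42); /* CPU proceeds immediately       */
    return wbuf_drain(&b, &a, &d) ? 0 : 1;
}
```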
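Finally, a sketch of the per-line state that a write-back tag array tracks, assuming the 64B blocks mentioned in the Alpha 21264 example; the helper names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

struct wb_line {
    bool     valid;          /* set when data is brought in      */
    bool     dirty;          /* set when data is modified        */
    uint32_t tag;            /* the "id" of the associated data  */
    uint8_t  data[64];       /* 64B block, as in the 21264 slide */
};

/* Write hit in a write-back cache: update the block in place and
 * mark it dirty; memory is updated only when the line is evicted. */
static void wb_write_hit(struct wb_line *line, int offset, uint8_t byte)
{
    line->data[offset] = byte;
    line->dirty = true;
}

/* Eviction: a dirty line must be written back to memory first. */
static bool needs_writeback(const struct wb_line *line)
{
    return line->valid && line->dirty;
}

int main(void)
{
    static struct wb_line line = { .valid = true, .tag = 0x7 };
    wb_write_hit(&line, 0, 0xFF);
    return needs_writeback(&line) ? 0 : 1;
}
```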