Technology Challenges and Directions of SRAM, DRAM, and eDRAM Cell Scaling in Sub-20nm Generations
Jai-hoon Sim, SK hynix, Icheon, Korea

Outline
1. Nobody is perfect: main memory & cache memory in a dilemma in the sub-20nm era.
2. SRAM scaling: diet or die. The 6T SRAM cell scaling crisis & the RDF problem.
3. DRAM scaling: divide and rule. Unfinished 1T1C DRAM cell scaling and its technical direction.
4. eDRAM story: float like a DRAM & sting like a SRAM. Does a logic-based DRAM process work?
5. All for one. Reshaping DRAM with logic technology elements.
6. Conclusion.

Memory Hierarchy
[Chart: access speed vs. density. Working memory (the L1$/L2$/L3$ caches built in SRAM and eDRAM) offers higher speed (< a few ns) and better endurance (>1x10^16 cycles); main memory (DRAM, with STT-RAM, PCRAM, and ReRAM as candidates) sits below it; storage-class memory (NAND) trades lower speed for bigger size.]

Technologies for Cache & Main Memories
SRAM:
• 6T cell.
• Non-destructive read.
• Performance-driven process technology.
• Always very fast.
eDRAM:
• 1T-1C cell.
• Destructive read; write-back needed.
• Leakage/performance-compromised process technology.
• Smaller than SRAM and faster than DRAM.
DRAM:
• 1T-1C cell.
• Destructive read; write-back needed.
• Leakage-control-driven process technology.
• Not always fast.
• Smallest cell and lowest cost per bit.
[The three technologies trade off speed, standby power, and density.]

eDRAM Concept: Performance Gap Filler
• SRAM and DRAM differ by roughly 20-30x in cell size and 50-100x in random-access speed; eDRAM sits between them.
• Is there any high-density & high-speed memory solution that could be 100% integrated into a logic SoC?

6T-SRAM Cell Operation
[Figure: 6T cell schematic (pull-up PU, pull-down PD, and pass-gate PG devices; WL, BL/BLB, storage nodes SN/SNB) with read and write waveforms, the cell current Icell, the bitline swing ΔV_BL, and the read-margin and write-margin definitions.]

DRAM's Charge Sharing
[Figure: 1T1C cell (WL driven to VPP, cell capacitor C_S, bitline capacitance C_BL) with the charge-sharing and write-back waveforms.]
Charge conservation between a cell written to V_DD and a bitline precharged to V_DD/2 gives

  C_S·V_DD + C_B·(V_DD/2) = (C_S + C_B)·V_BL,

so the signal developed on the bitline is

  ΔV_BL = V_BL − V_DD/2 = (1 / (1 + C_B/C_S)) · (V_DD/2),

or, if cell leakage over the retention time is included,

  ΔV_BL = (1 / (1 + C_B/C_S)) · (V_DD/2 − I_L·t_RET/C_S).
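As a quick numerical check of the charge-sharing relations above, here is a minimal sketch; the capacitance, leakage, and retention values are illustrative assumptions, not figures from the talk.

```python
# Charge-sharing signal on the bitline, with and without cell leakage.
# All component values below are assumptions chosen only for illustration.

def delta_v_bl(vdd, cs, cb, il=0.0, t_ret=0.0):
    """ΔV_BL = (V_DD/2 - I_L*t_RET/C_S) / (1 + C_B/C_S); il = 0 gives the ideal case."""
    return (vdd / 2 - il * t_ret / cs) / (1 + cb / cs)

VDD = 1.2        # array supply [V]
CS = 20e-15      # cell capacitance, 20 fF (assumed)
CB = 100e-15     # parasitic bitline capacitance, 100 fF (assumed)
IL = 1e-15       # cell leakage current, 1 fA (assumed)
T_RET = 64e-3    # retention time before refresh, 64 ms

print(f"no leakage  : {1e3 * delta_v_bl(VDD, CS, CB):.1f} mV")             # 100.0 mV
print(f"with leakage: {1e3 * delta_v_bl(VDD, CS, CB, IL, T_RET):.1f} mV")  # ~99.5 mV
```

The C_B/C_S ratio, not the absolute capacitances, sets the signal, which is why the scalability metrics below track C_S, C_B, and the cell leakage together.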
DRAM Scalability Metrics
[Figure: a bitline with one cell (WL, C_S), the parasitic bitline capacitance C_B, and the sense amplifier.]
• Cell capacitance C_S.
• Parasitic bitline capacitance C_B.
• Cell leakage current.
• Bitline offset voltage ΔV_BL(OFF).
• Cell I_ON current (speed).
• WL & BL resistance.
The available sensing signal follows the same relation as above: ΔV_BL = (1 / (1 + C_B/C_S)) · (V_DD/2 − I_L·t_RET/C_S).

DRAM's Bitline Offset Voltage
ΔV_BL(OFF) is the sum of a DC component, the sense-amplifier threshold mismatch ΔV_TH_SA, and an AC component caused by RC mismatches of the bitline pair (ΔC_B/C_B and ΔR_B/R_B, with a coefficient K) that is data-pattern dependent, plus coupled noise. The threshold mismatch itself decomposes as σ²(V_TH) = σ²(WKF) + σ²(RDF) + σ²(GOX) + σ²(LER): work-function, random-dopant-fluctuation, gate-oxide, and line-edge-roughness contributions.
• Bitline pitch & cell architecture are critical to ΔV_TH_SA.
• The offset voltage caused by the RC effect increases as the pull-down speed becomes faster; K_eDRAM ~ 10x K_DRAM.
[K. Itoh, VLSI Memory Chip Design, Springer, 2001.]

6T-SRAM Cell Layout
[Figure: 6T cell layout (PU/PD/PG devices, N-well, gate, active, contacts) with BL, VDD, and VSS routing; the unit cell measures 2x CPP in width and 5x M1 pitch in height.]

Contacted Poly Pitch (CPP)
[Figure: cross-section of one CPP showing the gate poly of length L_G, the contacts (CT), and the spacers.]

  (CPP − L_G)/2 = CT/2 + Spacer

• Gate length L_G shrank only about 5nm over the past four technology generations (65nm to 22nm).
• CPP scaling is therefore driven mostly by spacer & CT scaling, and CPP has scaled by ~30% every two years, the same pace as the design rule F.

SRAM Cell Scaling Trend
[Chart: SRAM cell size and the L_G/(CPP − L_G) ratio vs. (CPP − L_G)/2 for the 130nm through 22nm generations, with three projections: A: no L_G shrink beyond 22nm; B: L_G shrunk by 20% every generation; C: L_G shrunk by 30% every generation.]
• SRAM cell size correlates well with (CPP − L_G)/2 up to the 22nm generation.
• As (CPP − L_G)/2 approaches L_G at the 22nm generation, the L_G term is no longer negligible in cell-size scaling.
• L_G should be shrunk by 20-30% in the 15nm generation to keep the current cell-size scaling trend.
• For an SRAM cell size of 0.01X µm², CT/2 + Spacer = 8.0nm (equivalent to F = 5nm).

V_TH Impact on SRAM Scaling
[Charts: cell current vs. threshold-voltage variation and σ(V_TH) vs. 1/sqrt(W·L) across the 180nm to 32nm nodes, together with the nMOS/pMOS Vt design windows for the pass gate. H. Yamauchi, J. STS, Mar. 2009; K. Kuhn et al., Intel Tech. J., June 2008.]

DRAM's Scaling Analysis (1): the I_L = 0 Case
Starting from

  ΔV_BL = (0.5·V_DD − I_L·t_RET/C_S) / (1 + C_B/C_S)    (1)

and setting the cell leakage current to zero, eqn. (1) simplifies to

  V_DD_MIN = 2·ΔV_BL(OFF)·(1 + C_B/C_S),

where ΔV_BL(OFF) is the bitline offset voltage including sense-amplifier device mismatch and other electrical mismatches.
• For a decreasing C_S value, C_B or ΔV_BL(OFF) must be reduced in order to keep the same MAT size (the number of cells per bitline) under the same V_DD_MIN condition.
• The maximum allowed C_B/C_S value can be determined simply from the target V_DD_MIN and ΔV_BL(OFF), as the sketch below illustrates.
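A minimal sketch of that last point, assuming an offset budget and supply target chosen purely for illustration:

```python
# Leakage-free scaling relation from the slide: V_DD_MIN = 2 * ΔV_BL(OFF) * (1 + C_B/C_S).
# The offset and supply numbers below are assumptions, not values from the talk.

def vdd_min(dv_bl_off, cb_over_cs):
    """Minimum array supply at which the sense amplifier can still resolve the cell signal."""
    return 2.0 * dv_bl_off * (1.0 + cb_over_cs)

def max_cb_over_cs(vdd_min_target, dv_bl_off):
    """Largest C_B/C_S (i.e. longest bitline for a given C_S) that still meets the target V_DD_MIN."""
    return vdd_min_target / (2.0 * dv_bl_off) - 1.0

DV_BL_OFF = 0.030  # 30 mV total offset budget (assumed)

print(f"V_DD_MIN at C_B/C_S = 5   : {vdd_min(DV_BL_OFF, 5):.2f} V")          # 0.36 V
print(f"max C_B/C_S at V_DD = 1.2 V: {max_cb_over_cs(1.2, DV_BL_OFF):.1f}")  # 19.0
```

Read this way, shrinking C_S without improving ΔV_BL(OFF) or I_L forces either a shorter bitline (smaller C_B) or a higher V_DD_MIN, which is exactly the trade-off the scaling-direction slide below addresses.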
DRAM's Scaling Direction
[Chart: the C_B vs. C_S plane with a prohibited region; arrows ① and ② mark the two scaling paths.]
• C_B scaling depends strongly on cell innovation but is only a weak function of direct cell-size shrink.
• ① If C_S must be reduced without cell innovation, ΔV_BL(OFF) & I_L should be improved:
  • Improve the SA's mismatches (ΔV_TH and related terms).
  • Improve ΔC_B & ΔR_B.
  • Junction engineering.
• ② Decrease C_B as C_S scales down:
  • New cell innovation: buried WL.
  • New materials for the BL: low-k dielectric, air gap, etc.

Cell Capacitor Scaling (1)
[Chart: capacitor dielectric EOT vs. year, 2000-2014. EOT scales from about 50Å (SIS or MIS, concave or cylinder structures) through 20-30Å (Al2O3) and ~10Å (MIM cylinder, HfO2 & Al2O3) down to 3-5Å (MIM pillar, ZrO2 & Al2O3), that is, from high-k to ultra-high-k dielectrics. Source: ITRS 2000-2010.]

Cell Capacitor Scaling (2)
[Chart: expected C_S (arbitrary units) and capacitor aspect ratio vs. DRAM technology node from 70nm to 10nm, comparing cylinder and pillar capacitors at 3-4Å EOT; the required aspect ratio rises steeply as the node shrinks. S. Cha, IEDM, 2011.]

Bitline Capacitance Scaling
• Reduce the BL-WL capacitance: buried WL. [T. Schloesser et al., IEDM 2008.]
• Reduce the BL length per cell: from 4F in the 8F² cell to 2F in the 6F² cell.

Bitline Capacitance Scaling (3)
[Chart: C_B/C_S ratio and number of cells per BL vs. design rule for the 8F², 6F², 6F² + buried WL, and 4F² COB architectures with 3D capacitors.]

eDRAM Technology Trend
"eDRAM cell = as fast as an SRAM cell but as small as a DRAM cell."
Technology-wise, it should be logic-technology based:
• Logic design rules.
• Logic pass-gate transistor in the cell.
• Silicided cell junctions.
• Logic transistors for the bitline SA & periphery circuits.
• Low-k/Cu BEOL for the bitline and upper layers.
• A 3D cell-capacitor module process.
The consequences: a non-self-aligned cell process; a 6-7x bigger cell than DRAM (but still 3-4x smaller than SRAM); fast cell read/write access; 100-1000x higher leakage current; shorter retention time.
Design-wise, it is a fast DRAM design within the logic V_DD range:
• Reduce the bitline length (16-64 cells/BL) for fast signal development.
• Improve the low-V_DD margin: V_DD or GND bitline sensing.

DRAM vs. eDRAM (each item lists the DRAM value first, then the eDRAM value):
• Process technology: 3-metal DRAM process vs. >5-metal logic process.
• Cell size: ~6F²/cell vs. ~30-50F²/cell.
• GOX: 5-6nm vs. 2.5-3.5nm.
• Bitline: W vs. M1 (Cu) for CUB or W for COB.
• Bitline length: >512 cells/BL vs. 16-32 cells/BL.
• Metal layers: 1 (M0) + 3 layers vs. 5-6 layers.
• Cell capacitor: cylinder in both.
• VPP: 3.0V vs. <2.0V.
• VDD: 1.2-1.5V vs. 0.9-1.0V.
• Storage junction: none vs. silicided.
• Leakage current: <1fA/cell at 95°C vs. ~1pA/cell at 115°C.
• Refresh time: 64ms at 95°C vs. less than a few ms.
• BL sensing: 1/2·VDD sensing vs. VDD or GND sensing.
• Read speed: tRCD+tCL = 23ns and tRAS+tRP = 45ns vs. 3-10ns.

ML eDRAM Technology: CUB
[Cross-section: the bitline on M1, with the stacked capacitor (top electrode, high-k dielectric, stacked bottom electrode) built on a stacked contact below the bitline, over the logic FEOL (GC, n+, STI); the eDRAM capacitor module is inserted into the logic flow between FEOL and BEOL. [1] Y. Yamagata, CICC, 2006. [2] K. Tu, IEDM, 2011.]

Capacitor Under Bitline (CUB) & DT eDRAM
[Layout: active, gate, BL contact, and bitlines BLi/BLi+1 across wordlines WLi to WLi+3, with the isolation pitch CPP_ISO and pass-gate pitch CPP_PG marked.]
Cell size ~ 32F².
• Cell height is determined by the isolation pitch & GC-AA overlay; height ~ 4F.
• Cell width is limited by CPP and poly pitch; width ~ 8F.
• The cell PG device's gate length and width are limited by retention leakage and operating speed, respectively: PG device Lg = 2F-3F, W = 2F-3F.

ML eDRAM Technology: COB + Deep CT
[Cross-section: a tungsten bitline under the capacitor, with a triple-stacked contact reaching up to M1 (Cu), over the logic FEOL (GC, n+, STI). Y. Yamagata, CICC, 2006.]
• The tungsten BL requires dedicated metal process equipment.
• C_BL and R_BL are decoupled from the cell capacitance, which is good for eDRAM's low-voltage and high-speed operation. But the cell-capacitor height is limited by the M1 contact's aspect ratio, and on the logic side the M1 contact limits the logic's high-speed & low-voltage performance.

Capacitor Over Bitline (COB) eDRAM
[Layout: active, gate, BL contact, and bitlines BLi/BLi+1 across wordlines WLi to WLi+4.]
Cell size ~ 40F².
• Cell height ~ 4F, cell width ~ 10F.
• AA space, BL width, and BL-SN spacing are the critical design rules.

SRAM, eDRAM, & DRAM Cell Scaling Trend
[Chart: cell size [µm²] vs. design rule F for SRAM, eDRAM, and 6F²/4F² DRAM; at the small-F end of the plot the curves reach roughly 0.0228 µm² (SRAM), 0.008 µm² (eDRAM), and 0.0006 µm² / 0.0004 µm² (6F² / 4F² DRAM).]

Memory Cell Size Factor Trend
[Chart: cell size factor (cell area in units of F²) vs. design rule F. The SRAM factor spans roughly 123 to 228 and the eDRAM factor roughly 27 to 80 across the plotted generations, a gap of about 2.85x to 5x, while DRAM stays at 8 to 6F² with 4F² projected. A worked area comparison appears at the end of this excerpt.]

Memory Market Positioning
[Chart: speed vs. memory density [Mb/mm²]: L1 cache <1ns, L2 cache <5ns, L3 cache <10ns, eDRAM 3-10ns, DRAM >30ns.]

New Memory Technology: STT-RAM [S.
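As noted above, here is a small sketch that converts the F²-based cell-size factors quoted in the deck (6F² and 4F² DRAM, ~32F² CUB and ~40F² COB eDRAM, roughly 123-228F² SRAM) into absolute areas; the design-rule values chosen below are arbitrary illustration points, not nodes from the talk.

```python
# Absolute cell areas from the size factors quoted in the deck.
# The design-rule values in F_NM are arbitrary examples for illustration only.

SIZE_FACTORS = {        # cell area in units of F^2
    "DRAM 6F2":    6,
    "DRAM 4F2":    4,
    "eDRAM CUB":  32,
    "eDRAM COB":  40,
    "SRAM 6T":   123,   # low end of the SRAM factor range shown in the trend chart
}

F_NM = (30, 20)         # design rule F in nm (assumed example points)

for f_nm in F_NM:
    for name, factor in SIZE_FACTORS.items():
        area_um2 = factor * (f_nm * 1e-3) ** 2   # (F in µm) squared times the factor
        print(f"F = {f_nm} nm  {name:10s} ~ {area_um2:.4f} µm²")
```

Even at a relaxed design rule the factor gap dominates: the 6T SRAM cell comes out roughly 20x the 6F² DRAM cell and 3-4x the CUB eDRAM cell, consistent with the cell-size gaps cited on the performance-gap-filler slide earlier in the deck.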