Options for Radiation Tolerant High-Performance Memory - Steven M. Guertin, Jean Yang-Scharlotta, Raphael Some


Options for Radiation Tolerant High-Performance Memory
Steven M. Guertin, Member, IEEE, Jean Yang-Scharlotta, Member, IEEE, Raphael Some

Abstract— Memory technologies were reviewed for radiation effects performance in order to determine the most cost-efficient target for a possible memory investment aimed at creating a memory component of general applicability for space use. TID and SEL trends that had indicated improving intrinsic performance with scaling cannot be trusted. SEE hardening of memory arrays is prohibitive in terms of cell-level power and area requirements. Radiation-hardened cells and custom memory controllers are recommended for future memory developments.

I. INTRODUCTION

Space missions require memory for many different applications, including computer main memory, buffers, program and data storage, boot ROMs (read-only memory), and other applications. In the specific cases of high density volatile memory (currently handled with dynamic random-access memory [DRAM] devices) and high density non-volatile memory (currently handled with Flash memory), it was determined that total ionizing dose (TID) and single-event effects (SEE) performance were insufficient for high-reliability deep space systems. A comprehensive review of available memory technologies, coupled with a review of their radiation performance, will help identify the memory technologies with the most potential to benefit from a modest effort to develop a next-generation radiation hardened memory technology.

Radiation performance trends for technologies and devices provide key information about current and future expectations. Potential high-performance radiation hardened memory devices include radiation hardened by design devices, commercial devices, and hybrids. In this paper, we review radiation hardened by design (RHBD) libraries as well as fabrication options, expected performance, and limitations. In addition, we provide a technology review of a range of advanced memory technologies and their expected radiation performance into the future. These analyses provide a basis from which it will be possible to identify whether high-performance deep space memory requirements can be met now, in the future, or in the future with the addition of targeted development to remedy specific shortcomings of predicted performance.

This paper is a formal review of RHBD and commercial technologies of potential use for development of a space memory. The goals of this paper are to:
• Use appropriate radiation environments for evaluation of radiation effects in memory devices and technologies.
• Review available RHBD options and assess radiation performance.
• Identify the impact of RHBD limitations, especially at small feature sizes.
• For each memory technology, identify cell-level radiation performance strengths and weaknesses.
• Identify specific trends in memory controller strengths and weaknesses (e.g., Flash memory charge pumps).
• Identify any other strengths or weaknesses of note for memory technologies.
• Ensure the memory technologies are single-event latchup (SEL) free.
• Identify memory technologies that can provide DRAM-like and/or Flash-memory-like performance while meeting single- and multiple-bit upset (SBU and MBU) and SEFI rate requirements (with SEFI being the more problematic).

The approach taken for this work was the following. We took two generic space environments – one that was heavy-ion dominated, and one that was proton dominated. We identified the available RHBD suppliers and evaluated the performance of their RHBD libraries. We then identified memory technologies and their radiation performance and trends. With the last several generations of performance as a starting point, we projected technology trends into the future for memory devices.

This summary is organized as follows. Background is presented in Section II. Environments considered for this work are presented in Section III. A review of RHBD programs is presented in Section IV. Reviews of individual memory trends are presented in Section V. An exploration of packaging impact on radiation response is presented in Section VI. Finally, the findings are presented in Section VII.

II. BACKGROUND

The following general assumptions are made in order to focus this work:
1) The general timeframe targeting a future improved product is about 2-4 years. Significant findings that seem to require more than 4 years are noted but not pursued here. Our focus is therefore on memory technology predictions in the next few years only.
2) Device densities of 256 Mb or larger are required for deep space missions. Smaller density devices (and developing cell technologies) are not fully explored.
3) Feature size must be 65 nm or smaller for the required density (a rough sizing check is sketched after this list).
4) The types of memories of interest are limited to the following: static random-access memory (SRAM), dynamic RAM (DRAM), NAND Flash, NOR Flash, magnetic RAM (MRAM), spin-torque transfer (STT), ferro-electric RAM (FeRAM), phase change RAM (PCRAM), and resistive RAM (RRAM).
5) The types of radiation effects of the greatest concern are the following: TID, SBU, MBU, SEFI, SEL, stuck bits, and other types of permanent failures.
6) We identified two use categories of the greatest concern – high speed memory to act as computer main memory, and high density non-volatile memory to act as file system storage.
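To make assumption 3 concrete, a rough array-area estimate is sketched below. The bit-cell areas are typical published commercial 6T SRAM figures and the 2-4× RHBD cell penalty is a common rule of thumb; both are assumptions used here for illustration, not values taken from this review.

\[
A_{\text{array}} \approx N_{\text{bits}} \times A_{\text{cell}}, \qquad N_{\text{bits}} = 256\,\text{Mb} \approx 2.7\times10^{8}\ \text{bits}
\]
\[
65\,\text{nm}:\quad 2.7\times10^{8} \times 0.55\,\mu\text{m}^2 \approx 150\,\text{mm}^2
\]
\[
90\,\text{nm}:\quad 2.7\times10^{8} \times 1.1\,\mu\text{m}^2 \approx 300\,\text{mm}^2
\]

Even before peripheral circuitry and any RHBD cell-area penalty (often 2-4×), the 90 nm array alone is several hundred mm², which is why 65 nm or smaller is treated here as the practical entry point for a 256 Mb-class device.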
Figure 1: Commercial trends and methods to improve hardness, per Amort [1]. There are some notable examples that deviate from the predicted SEE and TID trend lines. Also, this is changing in the sub-32 nm range, as mixed-signal and process changes are making this only a rough guideline.

III. ENVIRONMENTS

For this paper, we focused on technology performance under a subset of key mission environments. These are the following: low-earth orbit (LEO) at 51.6° inclination, the International Space Station (ISS) orbit [2]; geo-stationary orbit (GEO); and interplanetary space. The GEO orbit includes a portion of the upper Van Allen belts and thus has a slightly higher TID component than interplanetary space [3]. The main difference between the latter two environments, as it pertains to mission radiation exposure, is that GEO has somewhat higher TID levels and generally has SEE rates between those of ISS and interplanetary space. For reference, TID rates are around 500 rad(Si)/year in LEO and 1-10 krad(Si)/year in GEO (depending on orbit, solar cycle, and other factors). The TID targets we were primarily interested in were 100 krad(Si) minimum, with a desired capability of 300 krad(Si).
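As an illustrative check of these targets against the quoted environment rates, a simple dose budget is sketched below; the 15-year duration and the radiation design margin (RDM) of 2 are assumptions for this sketch, not requirements from the review.

\[
D_{\text{mission}} \approx R_{\text{TID}} \times t_{\text{mission}} \times \text{RDM}
\]
\[
\text{GEO (worst case): } 10\,\text{krad(Si)/yr} \times 15\,\text{yr} \times 2 = 300\,\text{krad(Si)}
\]
\[
\text{LEO (ISS-like): } 0.5\,\text{krad(Si)/yr} \times 15\,\text{yr} \times 2 = 15\,\text{krad(Si)}
\]

Under these assumptions, a long GEO mission plausibly consumes the full 300 krad(Si) desired capability, while an ISS-class LEO mission stays well below the 100 krad(Si) minimum; the targets are driven by the harsher orbits plus design margin rather than by LEO exposure.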
IV. RHBD OPTIONS

It is possible to make standard components perform better in radiation environments by altering low-level library element designs (e.g., by adding feedback loops), altering higher-level functionality (such as triple module redundancy [TMR]), or by changing the thickness of material stacks or using alternate materials. The collection of all these methods is referred to as radiation hardening by design. That is, by simply informing the fabrication facility to construct a device differently, radiation performance can be improved (as opposed to modifying the actual process the facility uses, which has been termed "radiation hardening by process"). There are a handful of suppliers that can create new integrated circuits (ICs), or modify their commercial designs, using RHBD methods.

For this effort, it is important to be informed of RHBD-supplied memories and how they fit requirements. RHBD capabilities and available densities (related to feature size) are also important. With this information, it is possible to establish the state of the art for memory constructed using RHBD methods, where those efforts are likely to lead in the future, and whether a custom development using RHBD is feasible.

In this review, we explore the rad-hard-by-design programs listed below. Each of the programs has been reviewed and the findings are discussed.
1) Boeing 90 nm – as a baseline
2) Cobham 65 nm
3) BAE 45 nm
4) Boeing 32 nm

A. Boeing 90 nm

The 90 nm Boeing RHBD program is an example of a well-documented RHBD library that has been used to make ASICs. Some of the devices made in this technology node, such as the MAESTRO processor, had to be implemented with lower-end RHBD in some areas due to the number of SRAM bits used and the lack of space to implement the bits on the device. 90 nm is considered too large for the desired device density but is used to establish trends. This library is considered mature, but it is also old, and there is a successor in the newer 32 nm program from Boeing.

It is important to briefly discuss the observed memory SEE rate of 1×10⁻⁶/bit-day in ISS and 5×10⁻⁶/bit-day in solar minimum GCR. These rates are a few orders of magnitude worse than those of the bits from the other RHBD programs reviewed later. It is believed this happened because Boeing chose to implement better EDAC, along with partial hardening of cells, rather than full RHBD of cells. As a result, the rates for on-orbit errors in EDAC-protected memories are still very low (typically below 1×10⁻⁴/system-year with sufficient scrubbing). The results are summarized in Table 1.

The TID and SEL performance of the RHBD library developed on this fabrication line are both very good. However, this fabrication node may be at a sweet spot for both SEL and TID performance. Later devices use lower voltage, which helps with SEL.

[...] overall bit error rate lower before error correction. Also, the non-RHBD version of their SRAM cells is about 10-50× better than the other RHBD providers. The program overview [1] provides significant information on baseline library elements. This library is not expected to migrate to a smaller feature size in the near future. So, the parameters in the table would be expected to hold for the next 3-5 years. The logic error/SEFI rate shows similar sensitivity between protons and heavy ions, suggesting that some SEFIs are caused by proton events (secondaries).
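To put the quoted per-bit rates in context, the scaling below applies them to a 256 Mb array (the minimum density assumed in Section II); treating the entire array as unprotected is an assumption made only to show the raw magnitude before EDAC.

\[
R_{\text{device}} \approx R_{\text{bit}} \times N_{\text{bits}}, \qquad N_{\text{bits}} = 256\,\text{Mb} \approx 2.7\times10^{8}\ \text{bits}
\]
\[
\text{ISS: } 1\times10^{-6}\,/\text{bit-day} \times 2.7\times10^{8}\,\text{bits} \approx 270\ \text{upsets/day}
\]
\[
\text{Solar-minimum GCR: } 5\times10^{-6}\,/\text{bit-day} \times 2.7\times10^{8}\,\text{bits} \approx 1.3\times10^{3}\ \text{upsets/day}
\]

Hundreds of raw upsets per device-day are unusable without correction, which is why the EDAC-plus-scrubbing approach described above is what brings the delivered error rate down toward the quoted figure of below 1×10⁻⁴/system-year.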