Self-Healing and Secure Low-Power Memory Systems

Total Pages: 16

File Type: PDF, Size: 1020 KB

Self-healing and secure low-power memory systems
Mădălin Neagu

WARNING: By consulting this thesis you accept the following conditions of use. Dissemination of this thesis through the institutional repository UPCommons (http://upcommons.upc.edu/tesis) and the cooperative repository TDX (http://www.tdx.cat/) has been authorized by the holders of the intellectual property rights solely for private use within research and teaching activities. Reproduction for profit is not authorized, nor is its dissemination or availability from any site other than the UPCommons or TDX service. Presenting its content in a window or frame external to UPCommons (framing) is not authorized. This reservation of rights covers both the presentation summary of the thesis and its contents. When using or citing parts of the thesis, the name of the author must be indicated.

SELF-HEALING AND SECURE LOW-POWER MEMORY SYSTEMS
Doctoral thesis presented for the degree of Doctor from the Universitat Politècnica de Catalunya, within the Doctoral Programme in Electronic Engineering.
Author: Mădălin Neagu
Supervisors: Joan Figueras, Salvador Manich
Barcelona, September 2017

Abstract

Memory systems store critical information in any digital system; they are therefore susceptible to transient errors and are the focus of various types of attacks. It is crucial for a memory system to keep its information as accurate as possible, so there is a need to design, implement and test systems capable of handling errors by themselves, that is, of running autonomously.
Self-healing capabilities for memory systems translate into error detecting and correcting codes and into replacing or replicating memory elements. Security and data privacy are difficult to implement in memory systems because of the overwhelming variety of attacks. This thesis proposes strategies against specific attacks that can occur in memory systems. The self-healing methodology and the security solutions are evaluated from several perspectives: performance, area and delay overhead, and power consumption.

Keywords: error detection, error correction, memory systems, data scrambling, cache memories, side-channel attack, simple and differential power analysis

Prologue

The main objective of this thesis is to bring new contributions to the domain of self-healing and secure systems; in particular, to develop a self-healing technique for memory systems and to increase the security of memory systems, using techniques that favor low power consumption. To achieve this, three major research objectives were set: to design an error detection and correction scheme for errors that occur in memory systems and integrate it into a memory system; to design techniques that increase the security and data privacy of memory systems against different types of attacks; and to combine the previous two into a single solution, achieving a self-healing and secure low-power memory system. The low-power aspect of the proposed solutions and techniques is evaluated during the design stage and afterwards through simulation. The architectures are also evaluated from several other points of view, such as error detecting and correcting performance, area and delay overhead, and security efficiency.

I want to thank all the people who have helped and supported this doctoral thesis. I would like to express my sincere gratitude to Prof. Salvador Manich for introducing me to the security domain, for his continuous support of my Ph.D. study and related research, and for his patience, motivation and immense knowledge; his guidance helped me throughout the research and the writing of this thesis. I also want to thank Prof. Joan Figueras for indicating a research direction and for stimulating me during the doctoral program and, last but not least, Prof. Liviu Miclea for his support and encouragement. I would like to thank my family, especially my wife and my parents, for supporting me throughout the writing of this thesis and in my life in general. This Ph.D. thesis has been partially sponsored by the Spanish government project TEC2013-41209-P.

Contents

1 Introduction
  1.1 Context
  1.2 State of the art
    1.2.1 Motivation
    1.2.2 Introduction
    1.2.3 The Self-Healing concept
    1.2.4 Memory systems
    1.2.5 Low-power systems
    1.2.6 Security in memory systems
  1.3 Objectives
  1.4 Structure

2 Unidirectional eDLC
  2.1 Introduction
  2.2 Motivation
  2.3 Theoretical background
    2.3.1 Memory systems
    2.3.2 Sources of errors in SRAM and DRAM memories
    2.3.3 Self-healing memory systems through error detection and correction schemes
  2.4 Proposed solution
    2.4.1 Modified Berger codes
    2.4.2 Coding schemes
    2.4.3 Error localization
    2.4.4 Error correction
    2.4.5 Error escapes
  2.5 Implementation
    2.5.1 Cadence implementation
    2.5.2 Integrating the proposed self-healing technique in memory systems
  2.6 Experimental results and evaluation
    2.6.1 Code delay
    2.6.2 Code redundancy
    2.6.3 Error localization ambiguity
    2.6.4 Error correction
    2.6.5 Error escapes
    2.6.6 Area of the code generator and memory resources
    2.6.7 Power consumption
    2.6.8 Delay
    2.6.9 Overall evaluation
  2.7 Conclusions

3 Security in cache memories (IST)
  3.1 Introduction
  3.2 Theoretical background
  3.3 Data scrambling
  3.4 Statement of the problem
  3.5 Proposed solution: Interleaved Scrambling Technique (IST)
    3.5.1 Scrambler Table
    3.5.2 Cache Memory
    3.5.3 Read and write cycles
  3.6 IST performance and efficiency
    3.6.1 Time performance
    3.6.2 Power efficiency
  3.7 Evaluation and experimental results
    3.7.1 CACTI tool evaluation
    3.7.2 FPGA model evaluation
  3.8 Conclusions

4 Defeating SPEMA and DPEMA
  4.1 Introduction
  4.2 Motivation
  4.3 Theoretical background
    4.3.1 Attacks on memory
    4.3.2 Cold-boot attacks
  4.4 Securing memory at hardware level
    4.4.1 Main memory
    4.4.2 Cache memory
    4.4.3 Interleaved Scrambling Technique
  4.5 Power (P) and Electromagnetic (EM) Radiation Analysis
    4.5.1 Simple P or EM Radiation Analysis Attack
    4.5.2 Differential P or EM Radiation Analysis Attack
    4.5.3 Attack model
  4.6 Statement of the problem
    4.6.1 Objective
  4.7 Proposed solution for defeating SPEMA
    4.7.1 eDLC review and integration with IST
    4.7.2 Scrambling vector redundancy filter
  4.8 Proposed solution for defeating DPEMA
    4.8.1 Example of DPEMA attack on ISTe
    4.8.2 DPEMA countermeasure
    4.8.3 How it works
  4.9 Evaluation and experimental results
    4.9.1 Leakage function
    4.9.2 Results
    4.9.3 Implementation costs
  4.10 Conclusions

5 Conclusions
  5.1 Scientific contributions
  5.2 Future research and developments

Bibliography

List of Figures

1.1 Conceptual model of the autonomic system [9]
1.2 SOC Consumer Portable Design Complexity Trends [1]
1.3 SOC Consumer Portable Power Consumption Trends [1]
1.4 Common construction of a memory hierarchy [13]
1.5 Components of a cache memory [14]
2.1 The triangle of balance for EDCs and ECCs
2.2 Basic 6T SRAM cell
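The contents above list "Modified Berger codes" as the basis of Chapter 2's unidirectional error detection scheme. As a rough orientation only, the sketch below shows the classic, unmodified Berger code idea in C; the word width, the check-symbol form and the injected fault are illustrative assumptions and not the thesis' modified scheme.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative classic Berger code (not the thesis' modified scheme):
 * the check symbol stored next to a data word is the number of 0-bits
 * in that word. A unidirectional error (only 0->1 flips or only 1->0
 * flips, typical of many memory fault mechanisms) moves the zero count
 * in one direction only, so the stored and recomputed checks can never
 * compensate each other and the error is always detected. */
static unsigned berger_check(uint32_t data)
{
    unsigned zeros = 0;
    for (int i = 0; i < 32; i++)
        if (((data >> i) & 1u) == 0u)
            zeros++;
    return zeros;                 /* stored alongside the data word */
}

int main(void)
{
    uint32_t word  = 0xCAFEBABE;
    unsigned check = berger_check(word);      /* written with the data */

    uint32_t faulty = word & ~0x0000000Au;    /* simulate two 1->0 flips */

    unsigned recomputed = berger_check(faulty);
    printf("stored check = %u, recomputed = %u -> %s\n",
           check, recomputed,
           check == recomputed ? "no error seen" : "error detected");
    return 0;
}
```

A mixed (bidirectional) multi-bit error could still escape such a basic code, which is one motivation for the modified codes, error localization and error-escape analysis evaluated in Chapter 2.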
Recommended publications
  • Increasing Reliability and Fault Tolerance of a Secure Distributed Cloud Storage
    Increasing reliability and fault tolerance of a secure distributed cloud storage. Nikolay Kucherov (1), Mikhail Babenko (1), Andrei Tchernykh (2), Viktor Kuchukov (1) and Irina Vashchenko (1). (1) North-Caucasus Federal University, Stavropol, Russia; (2) CICESE Research Center, Ensenada, Mexico. E-mail: [email protected], [email protected], [email protected], [email protected], [email protected]

    Abstract. The work develops the architecture of a multi-cloud data storage system based on the principles of modular arithmetic. This modification of the data storage system increases the reliability of data storage and the fault tolerance of the cloud system. To increase fault tolerance, adaptive data redistribution between the available servers is applied; this is possible thanks to the introduction of additional redundancy. The model allows stored data to be restored in case of failure of one or more cloud servers. It is shown how the proposed scheme makes it possible to tune reliability and redundancy and to reduce data storage overhead by adapting the parameters of the residue number system.

    1. Introduction. Currently, cloud services such as Google, Amazon, Dropbox and Microsoft OneDrive, which provide cloud storage and data processing services, are gaining high popularity. The main reason for using cloud products is the convenience and accessibility of the services offered. Thanks to the use of cloud technologies, it is possible to save the financial costs of maintaining servers for storing and securing information; all problems arising during the storage and processing of data are transferred to the cloud provider [1]. Distributed infrastructure represents conditions in which competition for resources between high-priority computing tasks of data analysis occurs regularly [2].
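As a rough illustration of the residue-number-system idea this abstract builds on (the moduli, the secret value and the two-of-three layout below are toy assumptions chosen for brevity, not the paper's parameters or redundancy management), the C sketch splits a value into residues, discards one share as if its server had failed, and rebuilds the value from the remaining shares with the Chinese remainder theorem.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical residue-number-system sketch: each cloud server stores one
 * residue of the secret modulo its own modulus. Because the secret fits in
 * the product of any two moduli, any single server can fail and the value
 * is still recoverable from the remaining residues. */

static int64_t mod_inverse(int64_t a, int64_t m)      /* extended Euclid */
{
    int64_t t = 0, new_t = 1, r = m, new_r = a % m;
    while (new_r != 0) {
        int64_t q = r / new_r, tmp;
        tmp = t - q * new_t; t = new_t; new_t = tmp;
        tmp = r - q * new_r; r = new_r; new_r = tmp;
    }
    return t < 0 ? t + m : t;
}

/* Chinese remainder theorem reconstruction from k residues. */
static int64_t crt(const int64_t *res, const int64_t *mod, int k)
{
    int64_t prod = 1, x = 0;
    for (int i = 0; i < k; i++) prod *= mod[i];
    for (int i = 0; i < k; i++) {
        int64_t part = prod / mod[i];
        x = (x + res[i] * part % prod * mod_inverse(part % mod[i], mod[i])) % prod;
    }
    return x;
}

int main(void)
{
    const int64_t moduli[3] = {1009, 1013, 1019};   /* pairwise coprime      */
    int64_t secret = 424242;                        /* < 1013*1019: 2 shares suffice */
    int64_t shares[3];
    for (int i = 0; i < 3; i++) shares[i] = secret % moduli[i];

    /* Pretend the server holding shares[0] is unreachable. */
    int64_t res2[2] = {shares[1], shares[2]};
    int64_t mod2[2] = {moduli[1], moduli[2]};
    printf("recovered from 2 of 3 shares: %lld\n", (long long)crt(res2, mod2, 2));
    return 0;
}
```

Adding redundant moduli beyond the minimum needed to cover the value range is what lets such a scheme trade storage overhead against tolerance of more simultaneous server failures.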
  • i2WAP: Improving Non-Volatile Cache Lifetime by Reducing Inter- and Intra-Set Write Variations
    i2WAP: Improving Non-Volatile Cache Lifetime by Reducing Inter- and Intra-Set Write Variations. Jue Wang (1), Xiangyu Dong (2), Yuan Xie (1,3), Norman P. Jouppi (4). (1) Pennsylvania State University, (2) Qualcomm Technology, Inc., (3) AMD Research, (4) Hewlett-Packard Labs. E-mail: {jzw175,yuanxie}@cse.psu.edu, [email protected], [email protected], [email protected]

    Abstract. Modern computers require large on-chip caches, but the scalability of traditional SRAM and eDRAM caches is constrained by leakage and cell density. Emerging non-volatile memory (NVM) is a promising alternative to build large on-chip caches. However, limited write endurance is a common problem for non-volatile memory technologies. In addition, today's cache management might result in unbalanced write traffic to cache blocks, causing heavily-written cache blocks to fail much earlier than others. Unfortunately, existing wear-leveling techniques for NVM-based main memories cannot simply be applied to NVM-based on-chip caches, because cache writes have intra-set variations as well as inter-set variations.

    … standby power, better scalability, and non-volatility. For example, among many ReRAM prototype demonstrations [11, 14, 21–25], a 4Mb ReRAM macro [21] can achieve a cell size of 9.5F² (15X denser than SRAM) and a random read/write latency of 7.2ns (comparable to SRAM caches with similar capacity). Although its write access energy is usually on the order of 10pJ per bit (10X of SRAM write energy, 5X of DRAM), the actual energy saving comes from its non-volatility property. Non-volatility can eliminate the standby leakage energy, which can be as high as 80% of the total energy consumption for an SRAM L2 cache [10]. Given the potential energy and cost saving opportunities (via reducing cell size) from the adoption of non-volatile technologies, replacing SRAM and eDRAM with them can …
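The write-variation problem described here can be pictured with a small calculation. The sketch below uses made-up write counts and a plain coefficient-of-variation measure, which is an assumption for illustration and not necessarily the paper's exact i2WAP metrics; it shows how imbalance across cache sets and within one set could be quantified before wear leveling.

```c
#include <math.h>
#include <stdio.h>

/* Illustrative only: estimate how unevenly writes land across a cache by
 * computing the coefficient of variation of per-set average write counts
 * (inter-set imbalance) and of per-way counts inside one set (intra-set
 * imbalance). Wear leveling aims to push both toward zero so that no
 * cache block wears out much earlier than the rest. */
#define SETS 4
#define WAYS 4

static double coeff_var(const double *v, int n)
{
    double mean = 0.0, var = 0.0;
    for (int i = 0; i < n; i++) mean += v[i];
    mean /= n;
    for (int i = 0; i < n; i++) var += (v[i] - mean) * (v[i] - mean);
    return mean > 0.0 ? sqrt(var / n) / mean : 0.0;
}

int main(void)
{
    /* hypothetical per-line write counts, indexed [set][way] */
    double writes[SETS][WAYS] = {
        { 900, 120,  80, 100 },
        { 200, 180, 210, 190 },
        {  50,  60,  40,  55 },
        { 400, 390, 410, 395 },
    };
    double per_set[SETS];
    for (int s = 0; s < SETS; s++) {
        double sum = 0.0;
        for (int w = 0; w < WAYS; w++) sum += writes[s][w];
        per_set[s] = sum / WAYS;
    }
    printf("inter-set imbalance (CoV of set averages): %.2f\n",
           coeff_var(per_set, SETS));
    printf("intra-set imbalance of set 0 (CoV of way counts): %.2f\n",
           coeff_var(writes[0], WAYS));
    return 0;
}
```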
  • Improving Performance and Reliability of Flash Memory Based Solid State Storage Systems
    Improving Performance and Reliability of Flash Memory Based Solid State Storage Systems. A dissertation submitted to the Graduate School of the University of Cincinnati in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science and Engineering, in the Department of Electrical Engineering and Computing Systems of the College of Engineering and Applied Science, by Mingyang Wang, 2015. B.E., University of Science and Technology Beijing, China. Committee: Professor Yiming Hu (Chair), Professor Kenneth Berman, Professor Karen Davis, Professor Wen-Ben Jone, Professor Carla Purdy.

    Abstract. Flash memory based Solid State Disk systems (SSDs) are becoming increasingly popular in enterprise applications where high performance and high reliability are paramount. While SSDs outperform traditional Hard Disk Drives (HDDs) in read and write operations, they pose some unique and serious challenges to I/O and file system designers. The performance of an SSD has been found to be sensitive to access patterns. Specifically, read operations perform much faster than write ones, and sequential accesses deliver much higher performance than random accesses. The unique properties of SSDs, together with the asymmetric overheads of different operations, imply that many traditional solutions tailored for HDDs may not work well for SSDs. The close relation between performance overhead and access patterns motivates us to design a series of novel algorithms for I/O scheduling and buffer cache management. By exploiting refined access patterns such as sequential, page clustering, and block clustering in a per-process, per-file manner, a series of innovative algorithms for the I/O scheduler and buffer cache can deliver higher performance of the file system and SSD devices.
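Since the abstract ties SSD performance to access patterns, the sketch below shows one minimal way such a pattern could be detected per file; it is an illustrative assumption (a single last-offset heuristic), not the dissertation's I/O scheduler or buffer-cache algorithms.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Minimal per-file classification of requests as sequential or random:
 * remember where the previous request on the file ended, and call the
 * next request sequential if it starts exactly there. An SSD-aware
 * scheduler or buffer cache could use such hints to group sequential
 * writes and treat random writes differently. */
struct file_stream {
    uint64_t next_expected;   /* offset right after the last request */
    bool     primed;          /* have we seen a request on this file yet? */
};

static bool is_sequential(struct file_stream *fs, uint64_t off, uint64_t len)
{
    bool seq = fs->primed && off == fs->next_expected;
    fs->next_expected = off + len;
    fs->primed = true;
    return seq;
}

int main(void)
{
    struct file_stream fs = {0};
    uint64_t reqs[][2] = { {0, 4096}, {4096, 4096}, {65536, 4096}, {69632, 4096} };

    for (size_t i = 0; i < sizeof reqs / sizeof reqs[0]; i++)
        printf("request at offset %llu -> %s\n",
               (unsigned long long)reqs[i][0],
               is_sequential(&fs, reqs[i][0], reqs[i][1]) ? "sequential" : "random");
    return 0;
}
```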
  • Configuring Oracle Solaris ZFS for an Oracle Database
    An Oracle White Paper, May 2010: Configuring Oracle® Solaris ZFS for an Oracle Database.

    Disclaimer. The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

    Contents: Overview; Planning Oracle Solaris ZFS for Oracle Database Configuration; Configuring Oracle Solaris ZFS File Systems for an Oracle Database; Configuring Your Oracle Database on Oracle Solaris ZFS File Systems; Maintaining Oracle Solaris ZFS File Systems for an Oracle Database.

    This document provides planning information and step-by-step instructions for configuring and maintaining Oracle Solaris ZFS file systems for an Oracle database.

    Overview. A large number of enterprise customers today run an Oracle relational database on top of storage arrays with different sophistication levels, and many customers would like to take advantage of the Oracle Solaris ZFS file system.
  • PC-Based Data Acquisition II Data Streaming to a Storage Device
    FYS3240 PC-based instrumentation and microcontrollers. PC-based data acquisition II: data streaming to a storage device. Spring 2016, Lecture 9. Bekkeng, 20.01.2016.

    Data streaming
    • Data written to or read from a storage device at a sustained rate is often referred to as streaming.
    • Trends in data storage: ever-increasing amounts of data (Big Data); record "everything" and play it back later; hard drives that are faster, bigger, and cheaper; solid state drives; RAID hardware; PCI Express, which provides higher, dedicated bandwidth.

    Applications requiring high-speed data streaming (examples)
    • High-speed data acquisition, combined with many DAQ channels.
    • Radar (giga-samples/s): RF recording and playback.
    • High-resolution and/or high-speed video: digital video recording and playback.

    Key system components for high-speed streaming
    • Hardware platform with high throughput and low latency: PXI/PCI Express bus.
    • High-speed data storage: hard disk drives (HDDs) in RAID, solid-state drives (SSDs).
    • Software for streaming to disk at high rates: parallel programming, fast binary file format.

    32-bit vs. 64-bit OS for DAQ applications
    • A 32-bit processor can reference 2^32 bytes, or 4 GB of memory (see the short sketch after this list).
    • A 64-bit processor is theoretically capable of referencing 2^64 locations in memory. However, all 64-bit versions of Microsoft operating systems currently impose a 16 TB limit on address space.
    • Note: 64-bit versions of operating systems and application programs are not necessarily faster than their 32-bit counterparts; they can be slower.
    • The main benefit of 64 bits for DAQ applications is the large amount of RAM possible.
    • Other applications that can benefit from 64 bits are those working with very large numbers.
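The addressing arithmetic quoted in these notes can be checked directly; the short C sketch below just evaluates the figures (treating the 16 TB limit as a binary terabyte count, an assumption of this sketch, and as the operating-system restriction the notes describe rather than a hardware bound).

```c
#include <stdint.h>
#include <stdio.h>

/* Worked version of the addressing arithmetic above: a 32-bit address
 * reaches 2^32 bytes (4 GiB); a 64-bit address reaches 2^64 bytes in
 * theory; the 16 TB figure is an OS-imposed limit, not a hardware one. */
int main(void)
{
    uint64_t    b32      = 1ULL << 32;                      /* 2^32 bytes */
    long double b64      = (long double)UINT64_MAX + 1.0L;  /* 2^64 bytes */
    uint64_t    os_limit = 16ULL << 40;                     /* 16 TiB     */

    printf("32-bit address space: %llu bytes (%.0Lf GiB)\n",
           (unsigned long long)b32, (long double)b32 / (1ULL << 30));
    printf("64-bit address space: about %.3Le bytes\n", b64);
    printf("example OS-imposed limit: %llu bytes (16 TiB)\n",
           (unsigned long long)os_limit);
    return 0;
}
```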
  • Data Redundancy in Individual Hard Drives (United States Patent US 7,191,286 B2, Forrer, Jr. et al.)
    United States Patent US 7,191,286 B2, Forrer, Jr. et al. Date of Patent: Mar. 13, 2007.
    Title: Data Redundancy in Individual Hard Drives.
    Inventors: Thomas R. Forrer, Jr., Round Rock, TX (US); Jason Eric Moore, Austin, TX (US); Abel Enrique Zuzuarregui, Paige, TX (US).
    Assignee: International Business Machines Corporation, Armonk, NY (US).
    Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 366 days.
    Appl. No.: 10/809,589. Filed: Mar. 25, 2004. Prior Publication Data: US 2005/0216657 A1, Sep. 29, 2005. Int. Cl.: G06F 12/00 (2006.01).
    Primary Examiner: Reginald Bragdon. Assistant Examiner: Shawn Gu. Attorney, Agent, or Firm: Duke W. Yee; Diana R. Gerhardt; James O. Skarsten.

    Abstract. A method and system are disclosed for recovering lost data with redundancy in an individual hard drive. A mirroring-type process is used in a single hard drive to maintain a backup copy of all data stored on that hard drive. The hard drive maintains two copies of the stored data on different storage media segments or partitions defined by separate read/write heads. In the event that a hard drive media defect or error occurs with respect to any of the data stored on a segment defined by one head, that data can be restored from the copy on the segment defined by the other head.
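The mirroring process claimed in this patent can be pictured with a toy model. In the sketch below a flat array stands in for the drive, the mirror region stands in for the segment served by a second head, and the failure flag stands in for a detected media defect; none of this is the patented implementation, only a hedged illustration of the idea of keeping two copies inside one drive.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative single-drive mirroring: every block is written twice, once
 * to the primary region and once to a mirror region at a fixed offset.
 * A read that fails on the primary copy is satisfied from the mirror. */
#define BLOCK   16
#define BLOCKS  8
#define MIRROR  (BLOCKS * BLOCK)          /* byte offset of the mirror region */

static unsigned char disk[2 * BLOCKS * BLOCK];

static void write_block(unsigned idx, const unsigned char *buf)
{
    memcpy(disk + idx * BLOCK,          buf, BLOCK);   /* primary copy */
    memcpy(disk + MIRROR + idx * BLOCK, buf, BLOCK);   /* mirror copy  */
}

/* primary_ok models whether a media defect was detected on the primary segment */
static void read_block(unsigned idx, unsigned char *buf, int primary_ok)
{
    const unsigned char *src = primary_ok ? disk + idx * BLOCK
                                          : disk + MIRROR + idx * BLOCK;
    memcpy(buf, src, BLOCK);
}

int main(void)
{
    unsigned char out[BLOCK + 1] = {0};
    write_block(3, (const unsigned char *)"redundant data!");
    read_block(3, out, 0);                /* pretend the primary copy failed */
    printf("recovered: %s\n", out);
    return 0;
}
```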
  • Storage Administration Guide (SUSE Linux Enterprise Server 15)
    SUSE Linux Enterprise Server 15: Storage Administration Guide. Provides information about how to manage storage devices on a SUSE Linux Enterprise Server. Publication Date: September 24, 2021. SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA. https://documentation.suse.com

    Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see https://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

    Contents: About This Guide (Available Documentation; Giving Feedback; Documentation Conventions; Product Life Cycle and Support; Support Statement for SUSE Linux Enterprise Server; Technology Previews). Part I: File Systems and Mounting. 1 Overview of File Systems …
  • Of File Systems and Storage Models
    Chapter 4: Of File Systems and Storage Models.

    "Disks are always full. It is futile to try to get more disk space. Data expands to fill any void." – Parkinson's Law as applied to disks

    4.1 Introduction. This chapter deals primarily with how we store data. Virtually all computer systems require some way to store data permanently; even so-called "diskless" systems do require access to certain files in order to boot, run and be useful. Albeit stored remotely (or in memory), these bits reside on some sort of storage system. Most frequently, data is stored on local hard disks, but over the last few years more and more of our files have moved "into the cloud", where different providers offer easy access to large amounts of storage over the network. We have more and more computers depending on access to remote systems, shifting our traditional view of what constitutes a storage device.

    As system administrators, we are responsible for all kinds of devices: we build systems running entirely without local storage just as we maintain the massive enterprise storage arrays that enable decentralized data replication and archival. We manage large numbers of computers with their own hard drives, using a variety of technologies to maximize throughput before the data even gets onto a network. In order to be able to optimize our systems on this level, it is important for us to understand the principal concepts of how data is stored, the different storage models and disk interfaces. It is important to be aware of certain physical properties of our storage media, and the impact they, as well as certain historic limitations, have on how we utilize disks.
  • RAMBleed: Reading Bits in Memory Without Accessing Them
    RAMBleed: Reading Bits in Memory Without Accessing Them. Andrew Kwong (University of Michigan), Daniel Genkin (University of Michigan), Daniel Gruss (Graz University of Technology), Yuval Yarom (University of Adelaide and Data61).

    Abstract. The Rowhammer bug is a reliability issue in DRAM cells that can enable an unprivileged adversary to flip the values of bits in neighboring rows on the memory module. Previous work has exploited this for various types of fault attacks across security boundaries, where the attacker flips inaccessible bits, often resulting in privilege escalation. It is widely assumed, however, that bit flips within the adversary's own private memory have no security implications, as the attacker can already modify its private memory via regular write operations. We demonstrate that this assumption is incorrect by employing Rowhammer as a read side channel. More specifically, we show how an unprivileged attacker can exploit the data dependence between Rowhammer-induced bit flips and the bits in nearby rows to deduce these bits, including values belonging to other processes.

    … directly access the changed memory location, the change is not visible to the processor or the operating system, and is not subject to any permission checks. Thus far, this ability to reliably flip bits across security boundaries has been exploited for sandbox escapes [21, 57], privilege escalation attacks on operating systems and hypervisors [21, 23, 53, 57, 63, 66], denial-of-service attacks [23, 30], and even for fault injection in cryptographic protocols [6]. A common theme for all past Rowhammer attacks is that they break memory integrity. Namely, the attacker uses Rowhammer to obtain a (limited) write primitive into otherwise inaccessible memory, and subsequently modifies the …
  • Data Stored Based Test Pattern Generation in Flash Memory
    Data Stored Based Test Pattern Generation in Flash Memory. Priyadharshini M (M.E. VLSI Design, PSNACET, Dindigul, India), Chelladurai T (Asst. Professor (ECE), PSNACET, Dindigul, India). Special Issue - 2018, International Journal of Engineering Research & Technology (IJERT), ISSN: 2278-0181, ICONNECT - 2k18 Conference Proceedings.

    Abstract. Flash memories built with floating-gate transistors are easily affected by device noise, which leads to a higher Bit Error Rate (BER). This noise depends heavily on the stored data pattern. Data Pattern Aware (DPA) is proposed to manipulate the ratio of 0's and 1's in stored data and so reduce the probability of data patterns that are easily affected by device noise. A test pattern is proposed to monitor the input vectors for concurrent testing of NAND flash memory. Only the required test patterns are picked out from the entire sequence of generated test patterns; these sequences are selected based on the data stored in memory. A response vector provides a comparison and determines whether the memory contains the tested data for the given input.

    Keywords: NAND flash memory, data pattern, test pattern generation.

    … testing of the given input data stored in the memory. The main idea is to provide:
    • The given input data is modified and stored using DPA.
    • A part of the DPA function, known as DPA-PPU (Pattern Probability Unbalance), modifies the pattern of the given data from the lowest number of 1's to its maximum. DPA-DRM (Data Redundancy Management) manages the redundancy introduced during the modification process.
    • Test patterns are generated for the data pattern to be stored in memory and verified after every reset.
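As a rough picture of the pattern-balancing step this abstract describes, the sketch below stores a word complemented together with a one-bit flag whenever it contains more 1's than 0's. The invert-flag encoding is an assumption for illustration; the paper's actual DPA-PPU encoding and DPA-DRM redundancy handling are not reproduced here.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit-ratio balancing: if a 32-bit word to be programmed has
 * more 1s than 0s, program its complement and record a flag bit, so the
 * cell array always sees the less error-prone ratio of the two. */
static unsigned ones(uint32_t w)
{
    unsigned n = 0;
    while (w) { n += w & 1u; w >>= 1; }
    return n;
}

/* Returns the word actually programmed; *inverted is the stored flag bit. */
static uint32_t encode(uint32_t data, int *inverted)
{
    *inverted = ones(data) > 16;          /* more 1s than 0s in 32 bits */
    return *inverted ? ~data : data;
}

static uint32_t decode(uint32_t stored, int inverted)
{
    return inverted ? ~stored : stored;
}

int main(void)
{
    int flag;
    uint32_t raw  = 0xFFFFF0FF;           /* heavily skewed toward 1s */
    uint32_t cell = encode(raw, &flag);
    printf("programmed 0x%08X (inverted=%d), decoded back to 0x%08X\n",
           (unsigned)cell, flag, (unsigned)decode(cell, flag));
    return 0;
}
```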
  • Opportunistic Memory Systems in Presence of Hardware Variability
    UCLA Electronic Theses and Dissertations. Title: Opportunistic Memory Systems in Presence of Hardware Variability. Permalink: https://escholarship.org/uc/item/85c1r1q9. Author: Gottscho, Mark William. Publication Date: 2017. Peer reviewed | Thesis/dissertation. eScholarship.org, powered by the California Digital Library, University of California.

    UNIVERSITY OF CALIFORNIA, Los Angeles. Opportunistic Memory Systems in Presence of Hardware Variability. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Electrical Engineering by Mark William Gottscho, 2017. © Copyright by Mark William Gottscho 2017.

    ABSTRACT OF THE DISSERTATION. Opportunistic Memory Systems in Presence of Hardware Variability, by Mark William Gottscho, Doctor of Philosophy in Electrical Engineering, University of California, Los Angeles, 2017. Professor Puneet Gupta, Chair.

    The memory system presents many problems in computer architecture and system design. An important challenge is worsening hardware variability that is caused by nanometer-scale manufacturing difficulties. Variability particularly affects memory circuits and systems, which are essential in all aspects of computing, by degrading their reliability and energy efficiency. To address this challenge, this dissertation proposes Opportunistic Memory Systems in Presence of Hardware Variability. It describes a suite of techniques to opportunistically exploit memory variability for energy savings and cope with memory errors when they inevitably occur. In Part 1, three complementary projects are described that exploit memory variability for improved energy efficiency. First, ViPZonE and DPCS study how to save energy in off-chip DRAM main memory and on-chip SRAM caches, respectively, without significantly impacting performance or manufacturing cost. ViPZonE is a novel extension to the virtual memory subsystem in Linux that leverages power variation-aware physical address zoning for energy savings.