AN2014 Application Note: How a Designer Can Make the Most of STMicroelectronics Serial EEPROMs

Total pages: 16

File type: PDF, size: 1020 KB

AN2014 Application note
How a designer can make the most of STMicroelectronics serial EEPROMs

Introduction

Electrically Erasable and PROgrammable Memory (EEPROM) devices are standard products used for the non-volatile storage of data parameters with fine granularity. This application note describes most of the internal architecture and related functionality of STMicroelectronics EEPROMs, such as the storage mechanism and interface circuits, and gives optimal settings for hardware, software and data management. With these guidelines, an application designer gains a better understanding of the device's operation and can profit from these recommendations to significantly improve the reliability of the application.

February 2015, DocID10701 Rev 10, www.st.com

Contents

1 EEPROM cell and memory array architecture
   1.1 Floating gate operation within an EEPROM cell
       1.1.1 Reading the value stored in a memory cell
       1.1.2 Writing a new value to the memory cell
       1.1.3 Cycling limit of EEPROM cells
   1.2 Electrical architecture of ST serial EEPROM arrays
       1.2.1 Memory array architecture
       1.2.2 Decoding architecture
       1.2.3 Intrinsic electrical stress induced by programming
2 Choosing a suitable EEPROM for your application
   2.1 Choosing a memory type suited to the task to be performed
   2.2 Choosing an appropriate memory interface
   2.3 Choosing an appropriate supply voltage and temperature range
3 Recommendations to improve EEPROM reliability
   3.1 Electrostatic discharges (ESD)
       3.1.1 What is ESD?
       3.1.2 How to prevent ESD?
       3.1.3 ST EEPROM ESD protection
   3.2 Electrical overstress and latchup
       3.2.1 What are EOS and latchup?
       3.2.2 How to prevent EOS and latchup events
       3.2.3 ST EEPROM latchup protection
   3.3 Power supply considerations
       3.3.1 Power-up and power-on-reset sequence
       3.3.2 Stabilized power supply voltage
       3.3.3 Absolute maximum ratings
4 Hardware considerations
   4.1 I2C family (M24xxx devices)
       4.1.1 Chip enable (E0, E1, E2)
       4.1.2 Serial data (SDA)
       4.1.3 Serial clock (SCL)
       4.1.4 Write control (WC)
       4.1.5 Recommended I2C EEPROM connections
   4.2 SPI family (M95xxx devices)
       4.2.1 Chip Select (S)
       4.2.2 Write Protect (W)
       4.2.3 Serial Data input (D) and Serial Clock (C)
       4.2.4 Hold (HOLD)
       4.2.5 Serial Data output (Q)
       4.2.6 Recommended SPI EEPROM connections
   4.3 MICROWIRE® family (M93Cxxx and M93Sxxx devices)
       4.3.1 Chip Select (S)
       4.3.2 Serial Data (D) and Serial Clock (C)
       4.3.3 Organization Select (ORG)
       4.3.4 Serial Data output (Q)
       4.3.5 Don't use (DU)
       4.3.6 Recommended MICROWIRE EEPROM connections
   4.4 PCB layout considerations
       4.4.1 Cross coupling
       4.4.2 Noise and disturbances on power supply lines
5 Software considerations
   5.1 EEPROM electrical parameters
   5.2 Optimal write control
       5.2.1 Page mode
       5.2.2 Data polling
   5.3 Write protection
       5.3.1 Software write protection
       5.3.2 Hardware write protection
   5.4 Data integrity
       5.4.1 The checksum
       5.4.2 Data redundancy
       5.4.3 Checksum and data redundancy
       5.4.4 Extra redundancy
   5.5 Cycling endurance and data retention
       5.5.1 Cycling and data retention qualification procedures
       5.5.2 Optimal cycling with ECC
       5.5.3 Cycling and temperature dependence
       5.5.4 Defining the application cycling strategy
       5.5.5 Overall number of write cycles
6 Power supply loss and application reset
   6.1 Application reset
       6.1.1 I2C family
       6.1.2 SPI family
       6.1.3 MICROWIRE family
   6.2 Power supply loss
       6.2.1 Hardware recommendations
       6.2.2 Supply voltage energy tank capacitor
       6.2.3 Interruption of an EEPROM request
   6.3 Robust software and default operating mode
7 Operating conditions
   7.1 Temperature
   7.2 Humidity and chemical vapors
   7.3 Mechanical stress
8 Conclusions
9 References
10 Revision history

List of tables

Table 1. Three serial bus protocols
Table 2. ESD generation
Table 3. Typical POR threshold values
…
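Among the topics in the contents above, page mode (section 5.2.1) and data polling (section 5.2.2) lend themselves to a short code illustration. The sketch below is not taken from the application note: it assumes a hypothetical low-level I2C layer (`i2c_start()`, `i2c_write_byte()`, `i2c_stop()`) and an M24xxx-class device that takes two address bytes. It shows a page write followed by ACK polling, where the master keeps re-sending the device-select code until the EEPROM acknowledges again at the end of its internal write cycle.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define EEPROM_DEV_SELECT  0xA0u  /* M24xxx device select code, E2..E0 = 0, R/W = 0 */
#define EEPROM_PAGE_SIZE   32u    /* page size depends on density: check the datasheet */

/* Hypothetical MCU-specific I2C primitives (assumed, not a vendor API). */
extern void i2c_start(void);
extern void i2c_stop(void);
extern bool i2c_write_byte(uint8_t b);   /* returns true when the slave ACKs */

/* Write up to one page; data must not cross a page boundary (page mode). */
void eeprom_page_write(uint16_t addr, const uint8_t *data, size_t len)
{
    if (len == 0 || len > EEPROM_PAGE_SIZE)
        return;
    i2c_start();
    i2c_write_byte(EEPROM_DEV_SELECT);
    i2c_write_byte((uint8_t)(addr >> 8));    /* 2 address bytes on >2-Kbit parts */
    i2c_write_byte((uint8_t)(addr & 0xFFu));
    for (size_t i = 0; i < len; i++)
        i2c_write_byte(data[i]);
    i2c_stop();   /* the Stop condition triggers the internal write cycle */
}

/* Data polling: the device does not ACK while its write cycle is running. */
bool eeprom_wait_write_done(uint32_t max_tries)
{
    while (max_tries--) {
        i2c_start();
        bool acked = i2c_write_byte(EEPROM_DEV_SELECT);
        i2c_stop();
        if (acked)
            return true;   /* device responds again: write cycle finished */
    }
    return false;          /* still busy after max_tries, treat as an error */
}
```

Polling this way usually completes well before the worst-case write-cycle time (tW) given in the datasheet, which is the benefit of data polling over a fixed blind delay.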
Recommended publications
  • Hybrid Drowsy SRAM and STT-RAM Buffer Designs for Dark-Silicon-Aware NoC
    This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS. Hybrid Drowsy SRAM and STT-RAM Buffer Designs for Dark-Silicon-Aware NoC. Jia Zhan, Student Member, IEEE, Jin Ouyang, Member, IEEE, Fen Ge, Member, IEEE, Jishen Zhao, Member, IEEE, and Yuan Xie, Fellow, IEEE. Abstract: The breakdown of Dennard scaling prevents us from powering all transistors simultaneously, leaving a large fraction of dark silicon. This crisis has led to innovative work on power-efficient core and memory architecture designs. However, the research for addressing dark silicon challenges with the network-on-chip (NoC), which is a major contributor to the total chip power consumption, is largely unexplored. In this paper, we comprehensively examine the network power consumers and the drawbacks of the conventional power-gating techniques. To overcome the dark silicon issue from the NoC's perspective, we propose DimNoC, a dim silicon scheme, which leverages recent drowsy SRAM design and spin-transfer torque RAM (STT-RAM) technology to replace pure SRAM-based NoC buffers. In particular, we propose two novel hybrid buffer architectures: 1) a hierarchical buffer architecture, which divides the input buffers into a set of levels with different power states, and 2) a … [Fig. 1. 4 × 4 NoC-based multicore architecture. In each node, the local processing elements (core, L1, L2 caches, and so on) are attached to a router through an NI. Routers are interconnected through links to form an NoC. Four memory controllers are attached at the four corners, which will be used for off-chip memory access.]
  • z/OS ICSF Overview
    z/OS Version 2 Release 3. Cryptographic Services: Integrated Cryptographic Service Facility (ICSF) Overview. IBM SC14-7505-08. Note: before using this information and the product it supports, read the information in "Notices" on page 81. This edition applies to ICSF FMID HCR77D0 and Version 2 Release 3 of z/OS (5650-ZOS) and to all subsequent releases and modifications until otherwise indicated in new editions. Last updated: 2020-05-25. © Copyright International Business Machines Corporation 1996, 2020. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. Front matter: Figures; Tables; About this information (ICSF features; Who should use this information; How to use this information; Where to find more information …)
  • An Analysis of Data Corruption in the Storage Stack
    An Analysis of Data Corruption in the Storage Stack. Lakshmi N. Bairavasundaram*, Garth R. Goodson†, Bianca Schroeder‡, Andrea C. Arpaci-Dusseau*, Remzi H. Arpaci-Dusseau*. *University of Wisconsin-Madison, †Network Appliance, Inc., ‡University of Toronto. {laksh, dusseau, remzi}@cs.wisc.edu, [email protected], [email protected]. Abstract: An important threat to reliable storage of data is silent data corruption. In order to develop suitable protection mechanisms against data corruption, it is essential to understand its characteristics. In this paper, we present the first large-scale study of data corruption. We analyze corruption instances recorded in production storage systems containing a total of 1.53 million disk drives, over a period of 41 months. We study three classes of corruption: checksum mismatches, identity discrepancies, and parity inconsistencies. We focus on checksum mismatches since they occur the most. We find more than 400,000 instances of checksum mismatches over the 41-month period. … latent sector errors, within disk drives [18]. Latent sector errors are detected by a drive's internal error-correcting codes (ECC) and are reported to the storage system. Less well-known, however, is that current hard drives and controllers consist of hundreds of thousands of lines of low-level firmware code. This firmware code, along with higher-level system software, has the potential for harboring bugs that can cause a more insidious type of disk error – silent data corruption, where the data is silently corrupted with no indication from the drive that an error has occurred. Silent data corruptions could lead to data loss more often than latent sector errors, since, unlike latent sector errors, …
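    The "checksum mismatch" class studied above is straightforward to picture in code: the storage system records a checksum next to each block when it is written, and a mismatch when the block is read back signals silent corruption. A minimal sketch (an illustration, not code from the study), using the common CRC-32 polynomial:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE polynomial, reflected form 0xEDB88320). */
static uint32_t crc32(const uint8_t *p, size_t n)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (n--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* A checksum mismatch: the stored checksum no longer matches the block data. */
bool block_is_corrupt(const uint8_t *block, size_t len, uint32_t stored_crc)
{
    return crc32(block, len) != stored_crc;
}
```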
  • Detection Method of Data Integrity in Network Storage Based on Symmetrical Difference
    Symmetry, Article. Detection Method of Data Integrity in Network Storage Based on Symmetrical Difference. Xiaona Ding, School of Electronics and Information Engineering, Sias University of Zhengzhou, Xinzheng 451150, China; [email protected]. Received: 15 November 2019; Accepted: 26 December 2019; Published: 3 February 2020. Abstract: In order to enhance the recall and precision of data integrity detection, a method to detect network storage data integrity based on symmetric difference was proposed. Through a complete automatic image annotation system, crawler technology was used to capture images and related text information. Feature analysis of the text data was achieved with automatic word segmentation, POS tagging and Chinese word segmentation. Feature extraction of the image data was realized with the symmetrical difference algorithm and background subtraction. On the basis of data collection and feature extraction, the sentry data segment was introduced, and sentry data segments were then randomly selected to detect data integrity. Combined with the accountability scheme for data security of a trusted third party, the trusted third party was taken as the core. An online state judgment was made for each user operation, and credentials that cannot be denied by either party were generated, thus preventing the verifier from providing false validation results. Experimental results prove that the proposed method has a high precision rate, a high recall rate, and strong reliability. Keywords: symmetric difference; network; data integrity; detection. 1. Introduction: In recent years, cloud computing has become a new shared infrastructure based on the network. Based on the Internet, virtualization, and other technologies, a large number of system pools and other resources are combined to provide users with a series of convenient services [1].
  • Linux Data Integrity Extensions
    Linux Data Integrity Extensions. Martin K. Petersen, Oracle, [email protected]. Abstract: Many databases and filesystems feature checksums on their logical blocks, enabling detection of corrupted data. The scenario most people are familiar with involves bad sectors which develop while data is stored on disk. However, many corruptions are actually a result of errors that occurred when the data was originally written. While a database or filesystem can detect the corruption when data is eventually read back, the good data may have been lost forever. The software stack, however, is rapidly growing in complexity. This implies an increasing failure potential: hard drive firmware, RAID controller firmware, host adapter firmware, operating system code, system libraries, and application errors. There are many things that can go wrong from the time data is generated in host memory until it is stored physically on disk. Most storage devices feature extensive checking to prevent errors. However, these protective measures are almost exclusively being deployed internally to the device in a proprietary fashion. So far, there have been no means for collaboration between the layers in the I/O stack to ensure data integrity. An extension to the SCSI family of protocols tries to remedy this by defining a way to check the integrity of a request as it traverses the I/O stack. A recent addition to SCSI allows extra protection information to be exchanged between controller and disk. We have extended this capability up into Linux, allowing filesystems (and eventually applications) to attach integrity metadata to I/O requests. Controllers and disks can then verify the integrity of an I/O before com…
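    The protection information exchanged per sector is small and fixed-size. As an illustration only (field names are ours, and a real implementation would marshal the big-endian on-the-wire fields explicitly rather than overlay a struct), the widely documented 8-byte T10 DIF tuple appended to each 512-byte sector looks like this:

```c
#include <stdint.h>

/* Illustrative layout of the 8-byte T10 DIF protection-information tuple. */
struct t10_dif_tuple {
    uint16_t guard_tag;  /* CRC of the 512 bytes of sector data */
    uint16_t app_tag;    /* free for use by the application/initiator */
    uint32_t ref_tag;    /* Type 1: low 32 bits of the target LBA */
};
```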
  • What Is Cloud-Based Backup and Recovery?
    White paper: What is cloud-based backup and recovery? (Q120-20057). Executive Summary: Companies struggle with the challenges of effective backup and recovery. Small businesses lack dedicated IT resources to achieve and manage a comprehensive data protection platform, and enterprise firms often lack the budget and resources for implementing truly comprehensive data protection. Cloud-based backup and recovery lets companies lower their data protection cost or expand their capabilities without raising costs or administrative overhead. Firms of all sizes can benefit from cloud-based backup and recovery by eliminating on-premises hardware and software infrastructure for data protection, and by simplifying their backup administration, making it something every company should consider. Partial or total cloud backup is a good fit for most companies given not only its cost-effectiveness, but also its utility. Many cloud-based backup vendors offer continuous snapshots of virtual machines, applications, and changed data. Some offer recovery capabilities for business-critical applications such as Microsoft Office 365. Others also offer data management features such as analytics, eDiscovery and regulatory compliance. This report describes the history of cloud-based backup and recovery, its features and capabilities, and recommendations for companies considering a cloud-based data protection solution. What does "backup and recovery" mean? There's a difference between "backup and recovery" and "disaster recovery." Backup and recovery refers to automated, regular file storage that enables data recovery and restoration following a loss.
  • Understanding Real World Data Corruptions in Cloud Systems
    Understanding Real World Data Corruptions in Cloud Systems. Peipei Wang, Daniel J. Dean, Xiaohui Gu. Department of Computer Science, North Carolina State University, Raleigh, North Carolina. {pwang7,djdean2}@ncsu.edu, [email protected]. Abstract: Big data processing is one of the killer applications for cloud systems. MapReduce systems such as Hadoop are the most popular big data processing platforms used in the cloud system. Data corruption is one of the most critical problems in cloud data processing, which not only has serious impact on the integrity of individual application results but also affects the performance and availability of the whole data processing system. In this paper, we present a comprehensive study on 138 real world data corruption incidents reported in Hadoop bug repositories. We characterize those data corruption problems in four aspects: 1) what impact can data corruption have on the application and system? 2) how is data corruption detected? 3) what are the … | … enough to require attention, little research has been done to understand software-induced data corruption problems. In this paper, we present a comprehensive study on the characteristics of the real world data corruption problems caused by software bugs in cloud systems. We examined 138 data corruption incidents reported in the bug repositories of four Hadoop projects (i.e., Hadoop-common, HDFS, MapReduce, YARN [17]). Although Hadoop provides fault tolerance, our study has shown that data corruptions still seriously affect the integrity, performance, and availability …
  • NASdeluxe Z-Series
    NASdeluxe Z-Series: Benefit from scalable ZFS data storage. By partnering with Starline and Open-E, you receive highly efficient and reliable storage solutions that offer: great adaptability; tiered and all-flash storage systems; high IOPS through RAM and SSD caching; superb expandability with Starline's high-density JBODs, without downtime. Starline Computer's NASdeluxe Z-Series with Open-E JovianDSS is a software-defined storage solution well-suited for a wide range of applications. It caters perfectly to the needs of enterprises that are looking to deploy a flexible storage configuration which can be expanded to a high-availability cluster. Starline and Open-E can look back on a strategic partnership of more than 10 years. As the first partner with a Gold partnership level, Starline has always been working hand in hand with Open-E to develop and deliver innovative data storage solutions. In fact, Starline supports worldwide enterprises in managing and protecting their storage, with over 2,800 Open-E installations to date. www.starline.de. Feature summary: enhanced storage performance; tiered RAM and SSD cache; data integrity check; data compression and in-line data deduplication; thin provisioning and an unlimited number of snapshots and clones; simplified management; flexible scalability; hardware independence. Z-Series: even with a standard configuration with nearline HDDs and SSDs for caching, you will be able to achieve high IOPS at a reasonable cost. [Chart: IOPS, 250 000]
  • MRAM Technology Status
    National Aeronautics and Space Administration. MRAM Technology Status. Jason Heidecker, Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California. JPL Publication 13-3, 2/13. NASA Electronic Parts and Packaging (NEPP) Program, Office of Safety and Mission Assurance. NASA WBS: 104593; JPL Project Number: 104593; Task Number: 40.49.01.09. Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109. http://nepp.nasa.gov. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by the National Aeronautics and Space Administration Electronic Parts and Packaging (NEPP) Program. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not constitute or imply its endorsement by the United States Government or the Jet Propulsion Laboratory, California Institute of Technology. ©2013. California Institute of Technology. Government sponsorship acknowledged. Table of contents: 1.0 Introduction; 2.0 MRAM Technology; 2.1 …
  • The Effects of Repeated Refresh Cycles on the Oxide Integrity of EEPROM Memories at High Temperature by Lynn Reed and Vema Reddy Tekmos, Inc
    The Effects of Repeated Refresh Cycles on the Oxide Integrity of EEPROM Memories at High Temperature. By Lynn Reed and Vema Reddy, Tekmos, Inc., 4120 Commercial Center Drive, #400, Austin, TX 78744. [email protected], [email protected]. Abstract: Data retention in stored-charge based memories, such as Flash and EEPROMs, decreases with increasing temperature. Compensation for this shortening of retention time can be accomplished by refreshing the data using periodic erase-write refresh cycles, although the number of these cycles is limited by oxide integrity. An alternate approach is to use refresh cycles consisting of rewrite-only cycles, without the prior erase cycle. The viability of this approach requires that such a refresh cycle induce less damage than an erase-write cycle. This paper studies the effects of repeated refresh cycles on oxide integrity in a high temperature environment and makes comparisons to the damage caused by erase-write cycles. The experiment consisted of running a large number of refresh cycles on a selected byte. The control group was other bytes which were not subjected to refresh-only cycles. The oxide integrity was checked by performing repeated erase-write cycles on each of the two groups to determine whether the refresh cycles decreased the number of erase-write cycles before failure. Data was collected from multiple parts, with different numbers of refresh cycles, and at temperatures ranging from 25 °C to 190 °C. The experiment was conducted on microcontrollers containing embedded EEPROM memories. The microcontrollers were programmed to test and measure their own memories, and to report the results to an external controller.
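    A refresh-only cycle of this kind is simple to express in code. The sketch below is hypothetical (the paper's test firmware is not reproduced in the excerpt) and assumes byte-level accessors for the embedded EEPROM; each byte is read and rewritten with the same value, skipping the erase phase of a normal erase-write cycle:

```c
#include <stdint.h>

/* Assumed byte-level EEPROM accessors for the part under test. */
extern uint8_t eeprom_read_byte(uint16_t addr);
extern void    eeprom_rewrite_byte(uint16_t addr, uint8_t value); /* program only, no erase */

/* Refresh-only pass: rewrite every byte with its current value to
 * restore the stored charge that leaks faster at high temperature. */
void eeprom_refresh_region(uint16_t start, uint16_t len)
{
    for (uint16_t i = 0; i < len; i++) {
        uint8_t v = eeprom_read_byte(start + i);
        eeprom_rewrite_byte(start + i, v);
    }
}
```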
  • On the Effects of Data Corruption in Files
    Just One Bit in a Million: On the Effects of Data Corruption in Files. Volker Heydegger, Universität zu Köln, Historisch-Kulturwissenschaftliche Informationsverarbeitung (HKI), Albertus-Magnus-Platz, 50968 Köln, Germany. [email protected]. Abstract: So far little attention has been paid to file format robustness, i.e., a file format's capability for keeping its information as safe as possible in spite of data corruption. The paper at hand reports on the first comprehensive research on this topic. The research work is based on a study of the status quo of file format robustness for various file formats from the image domain. A controlled test corpus was built which comprises files with different format characteristics. The files are the basis for data corruption experiments, which are reported on and discussed. Keywords: digital preservation, file format, file format robustness, data integrity, data corruption, bit error, error resilience. 1 Introduction: Long-term preservation of digital information is by now, and will be even more so in the future, one of the most important challenges for digital libraries. It is a task for which a multitude of factors have to be taken into account. These factors are hardly predictable, simply due to the fact that digital preservation is something targeting the unknown future: Are we still able to maintain the preservation infrastructure in the future? Is there still enough money to do so? Is there still enough properly skilled manpower available? Can we rely on the current legal foundation in the future as well? How long is the technology we use at the moment sufficient for the preservation task? Are there major changes in technologies which affect the access to our digital assets? If so, do we have adequate strategies and means to cope with possible changes? And so on.
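    Experiments like these inject controlled bit errors into otherwise valid files and then test whether the file still parses or renders. A minimal, hypothetical helper for flipping a single bit in an in-memory file image (not the author's harness):

```c
#include <stddef.h>

/* Flip exactly one bit in a file image held in memory; bit_index counts
 * from the first bit of the buffer. Returns 0 on success, -1 if out of range. */
int inject_bit_error(unsigned char *buf, size_t nbytes, size_t bit_index)
{
    if (bit_index / 8 >= nbytes)
        return -1;
    buf[bit_index / 8] ^= (unsigned char)(1u << (bit_index % 8));
    return 0;
}
```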
  • PROTECTING DATA FROM RANSOMWARE AND OTHER DATA LOSS EVENTS: A Guide for Managed Service Providers to Conduct, Maintain and Test Backup Files
    PROTECTING DATA FROM RANSOMWARE AND OTHER DATA LOSS EVENTS. A Guide for Managed Service Providers to Conduct, Maintain and Test Backup Files. OVERVIEW: The National Cybersecurity Center of Excellence (NCCoE) at the National Institute of Standards and Technology (NIST) developed this publication to help managed service providers (MSPs) improve their cybersecurity and the cybersecurity of their customers. MSPs have become an attractive target for cyber criminals. When an MSP is vulnerable, its customers are vulnerable as well. Often, attacks take the form of ransomware. Data loss incidents—whether a ransomware attack, hardware failure, or accidental or intentional data destruction—can have catastrophic effects on MSPs and their customers. This document provides recommendations to help MSPs conduct, maintain, and test backup files in order to reduce the impact of these data loss incidents. A backup file is a copy of files and programs made to facilitate recovery. The recommendations support practical, effective, and efficient backup plans that address the NIST Cybersecurity Framework Subcategory PR.IP-4: "Backups of information are conducted, maintained, and tested." An organization does not need to adopt all of the recommendations, only those applicable to its unique needs. This document provides a broad set of recommendations to help an MSP determine: items to consider when planning backups and buying a backup service/product; issues to consider to maximize the chance that the backup files are useful and available when needed; issues to consider regarding business disaster recovery. CHALLENGE: Backup systems implemented and not tested or planned increase operational risk for MSPs. APPROACH: NIST Interagency Report 7621 Rev. 1, Small Business …