JFFS and JFFS2


EE382N-4 Advanced Microcontroller Systems
Flash Memory Technology and Flash-Based File Systems
Mark McDermott, Spring 2021
EE382N-4 Class Notes, 3/23/21

File Systems for Embedded Linux

Embedded file systems

§ RAMDISKS
  – Provide a mechanism for Linux to emulate a hard disk using volatile memory
    • You need a ramdisk when you do not have a traditional storage device such as a hard disk or flash
  – Provide a mechanism by which you can load an actual file system into memory and use it as the root file system
  – 'initrd' provides a mechanism by which a boot loader loads the kernel image along with a root file system into memory

Embedded file systems (cont)

§ RAMFS (RAM File System)
  – Typically, an embedded system will have files that are not useful after a reboot; these files would normally be stored in the /tmp directory
  – Storing these files in memory is a better option than storing them in non-volatile memory, because flash writes are expensive and wear out the device

Embedded file systems (cont)

§ CRAMFS (Compressed RAM File System)
  – A useful file system for flash storage
  – Introduced in the 2.4 kernel
  – A high-compression, read-only "regular" file system
    • Accessed via the buffer cache, using a block device that holds the actual data
  – Enabled using the MTD block device driver mode
  – CRAMFS uses 'zlib' routines for compression, compressing every 4 KB block
  – Using a program called mkcramfs, we can create a CRAMFS image that is then burned into flash

Embedded file systems (cont)

§ Journaling Flash File System – JFFS and JFFS2
  – A write to a file creates a log that records the offset in the file at which the data is written and the size of the data, along with the actual data
  – When a file needs to be read, the logs are played back; using the data sizes and offsets, the file is recreated
  – Requires garbage collection
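The log-replay idea can be sketched as a toy Python model (illustrative only; this is not the actual JFFS on-disk format): every write appends an (offset, data) entry, and reading the file means replaying the entries in order, with later entries overriding earlier ones.

```python
# Toy model of JFFS-style log replay (not the real on-disk format).
# Each write appends a log entry; reads replay the log in order.

log = []  # list of (offset, data) entries, oldest first

def write(offset, data):
    """A write never modifies data in place; it appends a log entry."""
    log.append((offset, data))

def read_file():
    """Replay all log entries to reconstruct the file contents."""
    image = bytearray()
    for offset, data in log:
        end = offset + len(data)
        if end > len(image):
            image.extend(b"\x00" * (end - len(image)))
        image[offset:end] = data
    return bytes(image)

write(0, b"hello world")
write(6, b"flash")          # a later entry overrides earlier data
print(read_file())          # b'hello flash'
```

Note that the first entry's bytes 6-10 are now dead data on flash: they still occupy space but no longer contribute to the file, which is exactly what garbage collection has to reclaim.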
    • As time proceeds, certain logs become partially or totally obsolete and must be cleaned

Embedded file systems (cont)

§ Main features of the JFFS2 file system
  – Management of erase blocks
    • Erase blocks are placed in three lists: clean, dirty, and free. The clean list contains only valid logs. The dirty list contains obsolete logs, to be purged when garbage collection runs. Blocks on the free list contain no logs and are used for storing new logs.
  – Garbage collection
    • Runs as a separate thread that is started when a JFFS2 file system is mounted
    • 99 times out of 100 this thread picks a block from the dirty list; the remaining 1 time out of 100 it picks a block from the clean list, to ensure wear leveling
    • JFFS2 reserves 5 blocks for garbage collection
  – Compression
    • JFFS2 uses various compression routines, including zlib and rubin
  – JFFS and JFFS2 file system images are created using mkfs.jffs and mkfs.jffs2

Embedded file systems (cont)

§ NFS (Network File System)
  – The popular desktop file systems EXT2 and EXT3 can be exported to an embedded system using NFS
    • Access the export as the root file system on the embedded system
  – During the debugging stage, developers often make changes to the root file system. In that case, writing to flash can be costly (flash has limited write cycles) and time consuming; NFS can be a good choice, provided the network driver is ready
  – With NFS you do not have any size restrictions, because all the storage is on the remote server
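The 99-to-1 victim selection described above can be sketched as follows (a simplified model; real JFFS2 tracks considerably more per-block state). Occasionally collecting a clean block looks wasteful, but it is what forces blocks full of long-lived valid data to be rewritten elsewhere, so erase cycles spread across the whole device.

```python
# Simplified model of JFFS2 garbage-collection block selection:
# 99 times out of 100 pick a victim from the dirty list, 1 time out
# of 100 from the clean list (wear leveling).
import random

def pick_gc_block(clean, dirty, rng=random):
    """Return the list a GC pass would take its victim block from."""
    if dirty and (not clean or rng.randrange(100) != 0):
        return dirty
    return clean

counts = {"dirty": 0, "clean": 0}
rng = random.Random(42)                 # fixed seed for repeatability
for _ in range(10_000):
    chosen = pick_gc_block(["c1"], ["d1"], rng)
    counts["dirty" if chosen == ["d1"] else "clean"] += 1
print(counts)   # roughly 99% dirty picks, 1% clean picks
```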
  – The Linux kernel provides a mechanism for automatically mounting a file system over NFS at boot time
    • The config options CONFIG_NFS_FS (which allows you to mount a file system from a remote server) and CONFIG_ROOT_NFS (which allows you to use NFS as the root file system) need to be turned on at build time

Embedded file systems (cont)

§ PROC file system
  – A logical file system used by the kernel to export its information to the external world
  – The proc file system can be used for system monitoring, debugging, and tuning
  – Proc files are created on the fly when they are opened
  – Proc files can be read-only or read-write, depending on the information the files export
  – Read-only proc files can be used to get kernel status and cannot be modified at runtime (example: process status information)
  – Read-write proc files contain information that can be changed at runtime (example: the TCP keep-alive time)

The EXT Family

§ EXT was written by Remy Card in 1992 and was loosely based on UFS (Unix File System)
  – It had a 2 GB maximum file size
  – (Before EXT, the original Linux file system was the minix file system)
§ EXT2 replaced EXT in 1993
§ EXT3 emerged in 2001
§ EXT4 was available in 2006

The venerable EXT2

§ The maximum file size depends on the block size: a 1 KB block size allows a 16 GB file size
§ By increasing the block size you can have files up to 2 TB
§ Directories are not indexed
§ File compression can be added to the file system
§ Files and directories are stored in inodes
  – The slide's inode diagram shows the fields: inode number, access control list, extended attributes, direct/indirect block pointers, three time stamps (access, change, modification), and the number of disk blocks
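Most of these inode fields are visible from user space through the stat() system call, whichever file system holds the file; a quick Python sketch:

```python
# Inspecting inode metadata via os.stat() -- the kernel fills this
# structure from the on-disk inode of the underlying file system.
import os
import stat
import tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

st = os.stat(path)
print("inode number :", st.st_ino)
print("file size    :", st.st_size)        # 5 bytes
print("link count   :", st.st_nlink)
print("permissions  :", oct(stat.S_IMODE(st.st_mode)))
print("mtime        :", st.st_mtime)       # one of the three time stamps

os.remove(path)
```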
  – The diagram also shows: deletion time, file generation number, file size, file type, group, number of links, owner, permissions, and status flags
§ Time stamps are 32 bits, which means they go negative on 1/19/2038

EXT3 adds journaling to EXT2

§ Stephen Tweedie introduced EXT3 in 1998, and it was included in the 2.4.15 kernel in 2001
§ EXT3 is an in-place upgrade to EXT2
§ This is essentially still an EXT2 file system
§ The big benefit is that a journal is maintained in metadata for subsequent FSCKs

EXT4 – major upgrade to EXT2/3

§ EXT4 was originally part of the Cluster File System
§ Ted Ts'o brought EXT4 into the 2.6.28 kernel for development in 2008
§ Google used EXT4 for its storage systems in 2010, and enabled it on Android later that year

EXT4 – New features

§ Large file systems
  – Volumes up to 1 exbibyte (EiB); file sizes up to 16 tebibytes (TiB)
  – Note that Red Hat recommends using XFS for volumes larger than 100 TB
  – The system uses 48-bit addressing
  – The time stamp has a date limitation of April 25, 2514
  – Online defragmentation
§ Additional features
  – The inode is 256 bytes, where EXT3's was 128 bytes
  – Persistent pre-allocation: extra space is reserved
  – Backwards compatibility: an EXT2 or EXT3 volume can be mounted as an EXT4 volume, which will slightly improve performance
  – Extent-based storage

EXT4 – Performance features

§ EXT4 has a number of performance-related features
  – Multi-block allocation, where multiple blocks can be written at once
  – Delayed allocation, which allows multiple block writes, reduces fragmentation, and reduces processor time for files that are short-term or temporary

BTRFS¹ – The Linux file system of the future

§ BTRFS is a new copy-on-write (CoW) filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair, and easy administration.
Jointly developed at multiple companies, BTRFS is licensed under the GPL and open for contribution from anyone
§ BTRFS was initially developed by Chris Mason, who had experience working on the Reiser file system
§ Currently the default in openSUSE 13.2; it will be the default in Fedora 23

¹ BTRFS, an abbreviation for "B-tree file system", is a file system based on the copy-on-write principle, initially designed at Oracle Corporation for use in Linux.

BTRFS Details

§ Max. file size: 16 EiB
§ File system permissions: POSIX and ACL
§ Max. number of files: 2^64
§ File allocation: extents
§ Supported operating systems: Linux, ReactOS
§ Max. volume size: 16 EiB

BTRFS – Features

§ BTRFS has the following features
  – Extent-based storage
  – 16 exbibyte (2^64 byte) maximum file size
  – Space-efficient packing of small files
  – Space-efficient indexed directories
  – Dynamic inode allocation
  – Writeable and read-only snapshots
  – Subvolumes
§ Checksums on data and metadata
§ Compression
§ Integrated multiple-device support
  – Striping
  – Mirroring
  – Striping + mirroring
  – Striping with single and dual parity
§ SSD awareness
§ Efficient incremental backup

BTRFS – Features (cont)

§ Conversion of EXT3 or EXT4 file systems
§ Seed devices (essentially a template)
§ Subvolume-aware quota support
§ Send/receive of subvolume changes
  – Efficient incremental filesystem mirroring
§ Batch deduplication (happens after writing)

Flash Memory Technology

Semiconductor Memory Types

§ Volatile semiconductor memory: RAM (DRAM, SRAM)
§ Non-volatile semiconductor memory: floating gate, nitride, ROM and fuse, plus emerging "NV RAM" / "unified memory" technologies: ferro-electric, magnetic, phase change, polymer
(Slide diagram, courtesy Intel)

Non-Volatile Memory History

(Timeline figure, courtesy Intel; MLC = Multi-Level Cell)
§ Bipolar ROMs/PROMs: late 1960s
§ EPROM: 1971
§ EEPROM: 1980
§ NAND Flash: 1985
§ NOR Flash: 1988
§ Ferro-electric: 1988
§ AND, DiNOR Flash: 1990s
§ MLC NOR Flash: 1995
§ MLC NAND Flash: 1996
§ Polymer: 2000s
§ Nitride storage: 2002
§ MRAM, phase change, ??: emerging

Non-Volatile Memory Terminology

§ Program: store charge on a floating gate
§ Erase: remove charge from the floating gate
§ Data retention: the longest time the NVM can keep the stored information
§ Endurance: the number of program/erase cycles the memory can withstand
§ Disturbs: a mechanism that corrupts the stored data in a memory cell
§ Memory cell: the device that contains the memory storage element
§ Memory array: an arrangement of memory cells
(Courtesy Intel)

Flash memory

§ An extension of EEPROM
  – Same floating-gate principle
  – Same write ability and storage permanence
§ Fast erase
  – Large blocks of memory are erased at once, rather than one word at a time
  – Blocks are typically several thousand bytes large
§ Flash memory wears out during writing.