File Systems Performance on Solid-State Drive

IJCSMS (International Journal of Computer Science & Management Studies), Vol. 14, Issue 11, November 2014 (An Indexed and Referred Journal), ISSN (Online): 2231-5268, www.ijcsms.com

Julian Fejzaj (1), Kristo Kapshtica (2), Denis Saatciu (3), Igli Tafa (4) and Endri Xhina (5)

1 Department of Informatics, Faculty of Natural Sciences, University of Tirana, [email protected]
2 Computer Engineering Department, Faculty of Information and Technology, Polytechnic University
3 Department of Informatics, Faculty of Natural Sciences, University of Tirana, [email protected]
4 Department of Informatics, Faculty of Natural Sciences, University of Tirana, [email protected]
5 Department of Informatics, Faculty of Natural Sciences, University of Tirana, [email protected]

Abstract

Most people have to choose between a Solid State Drive and a Hard Disk Drive as the data storage for their computer. But who would not like a storage device that speeds the computer up in several ways at a low enough price, such as the SSD (Solid-State Drive), a device with the same functionality as a hard disk drive (HDD)? Beyond that, I am interested in which file systems perform better on SSD architectures. I will demonstrate this using the Linux operating system and the Linux file systems on Ubuntu, as described below. It is important to know which file system we should use for our SSD.

Keywords: linux file systems, solid state drive, ssd, bonnie++

1. Introduction

A solid-state drive is mechanically, electrically and software compatible with a conventional hard drive [4]. SSD vs HDD: the difference is that the storage is not magnetic (HDD) or optical (CD) but a solid-state semiconductor such as RAM, PRAM or other electrically erasable RAM. This provides faster access than a hard drive, because SSD data can be accessed randomly in the same amount of time regardless of where it is stored [4]. A solid state drive stores information in microchips, like a memory stick, so it has no moving parts, unlike a hard disk drive, which uses a mechanical arm with a read/write head that moves around to read information from a storage platter. This architecture makes the SSD faster than the HDD. A solid state drive uses NAND-based flash memory, which is non-volatile: you can switch it off and the disk will "remember" all the data stored on it even after a hundred years, unlike the HDD, which can lose data after only a few years. As a conclusion, the data stored on an SSD can outlive you. Because of their architecture, SSDs also consume less power (under 2 W versus about 6 W for an HDD), since no electricity is needed to rotate platters as in a hard disk, and consequently they produce less heat and noise (which means more battery life for a notebook). NAND-based flash memory stores data without power; SSDs built around volatile memory, by contrast, suit applications requiring fast access but not necessarily data persistence after power loss, and such devices may employ separate power sources, such as batteries, to maintain data after power loss. As I said previously, an SSD is a memory chip constructed from integrated circuits (controller, cache and capacitor) with an interface connector, and this makes the SSD lighter than the HDD, which contains platters (rotating disks), a spindle and a motor [8][9][10]. On the other side, HDDs reach capacities of 500 GB to 2 TB, larger than SSDs, which do not exceed 512 GB. But the biggest disadvantage of the solid state drive is the huge difference in price: an SSD costs about $1 per gigabyte compared with $0.075 per gigabyte for an HDD [5][6]. Nevertheless, solid-state drives are being introduced forcefully onto territory held firmly for decades by the hard drive. As a conclusion, if money is secondary and computer performance, fast booting etc. are primary, I suggest you use a Solid-State Drive.

2. Related Works

As we know, besides the Linux operating system used in our case, there are other operating systems such as Windows, Mac etc. Each of these has different file systems, such as FAT, FAT32 and NTFS for Windows, and HFS+ on Mac. The different operating systems have different ways to test which of their file systems performs better on a solid-state drive. One of these is the work [12] by Patrick Schmid and Achim Roos, who tested Windows file systems such as FAT32, NTFS and exFAT with various programs ('AS SSD', 'CrystalDiskMark', 'Iometer' and 'PCMark 7'); moreover, they used two different SSDs for greater accuracy. As a conclusion, almost all of these programs gave the same results for the file systems regardless of which SSD was used. Another work, this time on the Linux operating system, which gave me great assistance, is the benchmarking test made by Phoronix Media [7] with their program called PTS (Phoronix Test Suite), which runs on Linux and serves to test computer hardware. In this work they tested the most used file systems on Linux, such as ext4, btrfs, xfs and reiserfs, looking at read/write performance, creation and deletion of files, synchronisation, number of threads, disk transactions etc. Phoronix also used two SSDs to increase the accuracy of the test, and both SSDs gave almost the same results in all of the above tests [7]. Differently from Phoronix, I have used bonnie++ for testing my Crucial SSD, because it is open source and more specific to Linux file systems on storage disks.

3. Theoretical Phase

We know that most file system code lives in kernel space, with a part in user space. Below is shown the architecture of the relationship between file systems in kernel and user space.

Figure 1. Kernel-user architecture

In user space are located the application and glibc (which provides the user interface for the file system calls: open, read, write, close). The system call interface works like a switch: it passes system calls from user space to the appropriate endpoints in kernel space [1]. The Virtual File System (VFS) exports a group of interfaces and dispatches them to the individual file systems, which each handle them differently. There are two caches for file system objects, inodes and directory entries, which hold recently used file system objects. Individual file systems such as ext4, nilfs, xfs etc. export a group of interfaces that is used by the VFS. The buffer cache is managed as a group of LRU (least recently used) lists, which speeds up requests, such as reads and writes, passed between the file systems and the device drivers they use.
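To make this path concrete, the minimal sketch below (not code from the paper; the file path is just an example) exercises exactly the glibc entry points named above. Each call crosses the system call interface, the VFS dispatches it to the concrete file system (ext4, xfs, ...), and the read may be satisfied from the cache layers just described.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[64];

    /* glibc wrapper -> system call interface -> VFS -> concrete file system */
    int fd = open("/mnt/ssd/test.txt", O_CREAT | O_RDWR, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    if (write(fd, "hello ssd\n", 10) != 10)   /* write request routed by the VFS */
        perror("write");

    lseek(fd, 0, SEEK_SET);                   /* rewind before reading the data back */

    ssize_t n = read(fd, buf, sizeof(buf));   /* read, often served from the cache */
    if (n > 0)
        printf("read back %zd bytes\n", n);

    close(fd);
    return 0;
}
```

Running such a program under strace shows the corresponding open, write, read and close system calls crossing the user-kernel boundary, independently of which file system is mounted underneath.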
3.1. File Systems

A file system is an organization of data and metadata that an operating system uses to keep track of files on a storage device. The system used in this paper is Linux, which brings to mind the phrase: "On a UNIX system, everything is a file; if something is not a file, it is a process." The proc file system (a pseudo-filesystem which provides an interface to kernel data structures) is mounted on /proc; the file /proc/filesystems lists which file systems our kernel currently supports. In order to use one of them, we have to mount it.
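As a small, self-contained sketch (not code from the paper), that list can also be read programmatically:

```c
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("/proc/filesystems", "r");
    if (!fp) {
        perror("fopen /proc/filesystems");
        return 1;
    }

    char line[128];
    while (fgets(line, sizeof(line), fp) != NULL)
        fputs(line, stdout);   /* e.g. "nodev\tproc" or "\text4" */

    fclose(fp);
    return 0;
}
```

Entries flagged nodev, such as proc itself, are pseudo file systems that need no backing block device; the remaining entries (ext4, xfs, ...) are the ones that can be created on the SSD and mounted for the tests below.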
Figure 2. Linux file systems

Below I present a short description of some of the most available filesystems:

minix is the filesystem used in the Minix operating system. It is the oldest and the most reliable, but quite limited in features. It remains useful for floppies and RAM disks.

ext is a modification of minix that lifts the limits on the filesystem size. It is not very popular but works well. It has been removed from the kernel (in 2.1.21).

ext2 is the most featureful and highest-performance disk filesystem for fixed disks as well as removable media. It was designed as a compatible extension of ext, meaning that new versions of the filesystem do not require remaking existing filesystems.

XFS is a journaling filesystem, developed by SGI as a 64-bit file system. It was designed to maintain high performance with large files and was integrated into Linux in kernel 2.4.20.

JFS is a journaling filesystem, developed by IBM to work in high-performance environments, that was integrated into Linux in kernel 2.4.24.

ntfs includes a number of userspace utilities called ntfsprogs, such as mkntfs, ntfsundelete and ntfsresize.

btrfs is a new copy-on-write filesystem for Linux aimed at implementing advanced features while focusing on fault tolerance, repair and easy administration.

nilfs is a new implementation of a log-structured file system (LFS) supporting continuous snapshotting [2][3].

zfs supports access control lists (ACLs); it was designed by Sun Microsystems and includes both a file system and a logical volume manager.

4. Experimental Phase

In this phase I demonstrate an experiment which tests some of the most available file systems to see which of them performs better on a Solid-State Drive.

A. Hardware environment

The hardware environment I used has these parameters:
CPU: Intel(R) Core(TM) i3 CPU 2.4 GHz (4 CPUs)
RAM: 4 GB Kingston
SSD: 128 GB Crucial, SATA-3

B. Software environment

As I mentioned before, the operating system I used is Ubuntu 12.04 LTS, with Bonnie++ as the benchmark.
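A sketch of how such a run might be automated is shown below; the mount points, file size and bonnie++ options are illustrative assumptions, not taken from the paper (each file system would first be created with the corresponding mkfs tool and mounted on the SSD).

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Hypothetical mount points, one per file system under test. */
    const char *mounts[] = { "/mnt/ext4", "/mnt/xfs", "/mnt/btrfs", "/mnt/nilfs2" };
    char cmd[256];

    for (size_t i = 0; i < sizeof(mounts) / sizeof(mounts[0]); i++) {
        /* -d test directory, -s file size in MB, -n small-file count, -u user to run as */
        snprintf(cmd, sizeof(cmd),
                 "bonnie++ -d %s -s 8192 -n 128 -u root", mounts[i]);
        printf("running: %s\n", cmd);
        if (system(cmd) != 0)
            fprintf(stderr, "bonnie++ failed on %s\n", mounts[i]);
    }
    return 0;
}
```

The test file size is commonly set to at least twice the installed RAM (here 8192 MB against the 4 GB of RAM above) so that the page cache does not mask the raw performance of the drive.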