Operating Systems Software Eng


TCSS 422 B – Spring 2020
TCSS 422: Operating Systems – HDDs, RAID, File Systems; Final Exam Review
Wes J. Lloyd
School of Engineering and Technology, University of Washington - Tacoma
June 4, 2020

FALL 2020 - TCSS 562: SOFTWARE ENG. FOR CLOUD COMPUTING
• This is a "cloud computing" course
• Previous year's course: http://faculty.washington.edu/wlloyd/courses/tcss562
• Course introduces the major cloud computing delivery models: Infrastructure-as-a-Service (IaaS), Platform (PaaS), Functions (FaaS), Container (CaaS), Software (SaaS)
• Course features a software development project where we build and evaluate software entirely for the cloud
• Fall 2019 focus: developing serverless software, e.g. data processing pipelines
• Fall 2020 focus: serverless/cloud-native software, containerization, cloud services

TCSS 562 – CLOUD COMPUTING - 2
• Class does not have prerequisites
  . TCSS 422 provides a good foundation – we use Linux
• If interested in enrolling, contact by email
• Can take 1 x 500-level class; counts as a 400-level elective
  . SAVINGS: able to take a graduate course and only pay undergraduate tuition
• DOUBLE-DIP!!
  . A class taken in the last quarter of undergrad can be used twice
  . Once as an undergraduate elective towards graduation
  . Once as a graduate elective towards the Masters in Computer Science & Systems (MSCSS) degree

NEXT YEAR - TCSS 498/499: UNDERGRADUATE READING/RESEARCH IN CSS (ANY QUARTER)
• Independent study in "cloud computing"
• Work collaboratively to draft a proposal and submit it to Dr. Chinn, CSS Chair, for approval
• Focus on a variety of topics related to cloud/distributed systems
• Variable credits from 1 to 5; can take up to 10 credits of TCSS 498/499 as CSS elective credits
• Involves participation in a weekly research group meeting
  . Spring 2020: currently Wednesday at 12:30p
• Usually 1 or 2 one-on-one or small group meetings during the week
• Contact by email if interested; identify preferred quarter(s) and number of credits

COURSE EVALUATION: TCSS 422 B SPRING 2020
• Please complete the course evaluation survey at: https://uwt.iasystem.org/survey/106940
• New this quarter:
  . Assignment 2 – available in Java or C
  . Tutorial 2 – parallel prime number generation
  . Assignment 3 – kernel module programming, offered in tutorial format
  . Course entirely online & recorded
  . Paperless daily feedback surveys
  . Quizzes with multiple attempts
  . Problem solutions w/ document cam
  . Open book, note, and internet midterm and final exam
  . Slide revisions & refactoring for 100% online delivery

OBJECTIVES – 6/4
• Questions from 6/2
• Tutorial 2 (pthreads, locks, conditions) – due Thurs June 4
• Assignment 3 – on Linux kernel programming – offered in "tutorial" format – due Sat June 13
• Quiz 4 – Page Tables – optional – provides practice problems – to be posted
• Chapter 37: Hard Disk Drives
• Chapter 38: RAID (Redundant Array of Inexpensive Disks) – very brief
• Chapter 39/40: File Systems – very brief
• Practice Final Exam Questions – today, 2nd hour

MATERIAL / PACE FEEDBACK FROM 6/2
• Please classify your perspective on material covered in today's class (35 respondents): 1-mostly review, 5-equal new/review, 10-mostly new
  . Average – 6.44 (down from 7.3)
• Please rate the pace of today's class: 1-slow, 5-just right, 10-fast
  . Average – 5.56 (down from 5.83)
CH. 37: HARD DISK DRIVES

OBJECTIVES – CHAPTER 37
• HDD internals
• Seek time
• Rotational latency
• Transfer speed
• Capacity
• Scheduling algorithms

EXAMPLE: SIMPLE DISK DRIVE
• Single track disk

HARD DISK STRUCTURE
• Head: one per surface of the drive
• Arm: moves heads across the surface of the platters

SINGLE-TRACK LATENCY: SEEK TIME
• Seek time (Tseek): time to move the disk arm to the proper track
• Most time-consuming HDD operation

THE ROTATIONAL DELAY
• Rotational latency (Trotation): time to rotate to the desired sector
• Average Trotation is about half the time of a full rotation
• How to calculate Trotation from rpm:
  1. Calculate the time for 1 rotation based on rpm (convert rpm to rotations per second)
  2. Divide by two (average rotational latency)
• 7200 rpm = 8.33 ms per rotation; /2 = ~4.166 ms
• 10000 rpm = 6 ms per rotation; /2 = ~3 ms
• 15000 rpm = 4 ms per rotation; /2 = ~2 ms
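The rotational-latency arithmetic above is easy to check in code. A minimal C sketch (a back-of-the-envelope check, not course material; the helper name is ours):

#include <stdio.h>

/* Average rotational latency: time for half a rotation. */
static double avg_rotation_ms(double rpm)
{
    double ms_per_rotation = 60000.0 / rpm;  /* 60,000 ms per minute */
    return ms_per_rotation / 2.0;
}

int main(void)
{
    double rpms[] = { 7200.0, 10000.0, 15000.0 };
    for (int i = 0; i < 3; i++)
        printf("%6.0f rpm: %5.2f ms/rotation, avg latency %4.2f ms\n",
               rpms[i], 60000.0 / rpms[i], avg_rotation_ms(rpms[i]));
    return 0;
}

For a 7200 rpm drive this prints 8.33 ms per rotation and a ~4.17 ms average latency, matching the slide.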
FOUR PHASES OF SEEK
• Acceleration: the arm gets moving
• Coasting: the arm moves at full speed
• Deceleration: the arm slows down
• Settling: the head is carefully positioned over the track
  . Settling time is often high, from 0.5 to 2 ms

HDD I/O
• Data transfer: the final phase of I/O – time to read or write to the disk surface
• Complete I/O cycle:
  1. Seek (accelerate, coast, decelerate, settle)
  2. Wait on rotational latency (until the target sector aligns under the head)
  3. Data transfer

TRACK SKEW
• Sectors are offset across tracks to allow time for the head to reposition for sequential reads
• Without track skew, when the head is repositioned, the next sector would have already rotated past the head, forcing a wait of nearly a full rotation
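Putting seek, rotation, and transfer together gives the service time for one random I/O, as in the complete I/O cycle above. A small C sketch; the drive parameters (4 ms average seek, 7200 rpm, 125 MB/s transfer, 4 KB requests) are illustrative assumptions, not figures from the slides:

#include <stdio.h>

/* Estimated time to service one random read:
 * T_io = T_seek + T_rotation + T_transfer */
int main(void)
{
    double seek_ms     = 4.0;                    /* assumed average seek   */
    double rotation_ms = (60000.0 / 7200) / 2;   /* avg rotational latency */
    double xfer_mb_s   = 125.0;                  /* assumed transfer rate  */
    double req_kb      = 4.0;                    /* one 4 KB request       */

    double mb_per_req = req_kb / 1024.0;
    double xfer_ms    = mb_per_req / xfer_mb_s * 1000.0;
    double io_ms      = seek_ms + rotation_ms + xfer_ms;
    double iops       = 1000.0 / io_ms;

    printf("T_io = %.2f + %.2f + %.3f = %.2f ms\n",
           seek_ms, rotation_ms, xfer_ms, io_ms);
    printf("=> ~%.0f random IOPS, ~%.2f MB/s\n", iops, mb_per_req * iops);
    return 0;
}

Seek and rotational delay dominate; the transfer itself is negligible for small requests, which is why head scheduling and optimizations like track skew matter so much for HDD throughput.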
Recommended publications
  • Copy on Write Based File Systems Performance Analysis and Implementation
Copy On Write Based File Systems Performance Analysis and Implementation. Sakis Kasampalis, Kongens Lyngby 2010, IMM-MSC-2010-63. Technical University of Denmark, Department of Informatics, Building 321, DK-2800 Kongens Lyngby, Denmark.

Abstract: In this work I am focusing on Copy On Write based file systems. Copy On Write is used on modern file systems for providing (1) metadata and data consistency using transactional semantics, and (2) cheap and instant backups using snapshots and clones. This thesis is divided into two main parts. The first part focuses on the design and performance of Copy On Write based file systems. Recent efforts aiming at creating a Copy On Write based file system are ZFS, Btrfs, ext3cow, Hammer, and LLFS. My work focuses only on ZFS and Btrfs, since they support the most advanced features. The main goals of ZFS and Btrfs are to offer a scalable, fault tolerant, and easy to administrate file system. I evaluate the performance and scalability of ZFS and Btrfs. The evaluation includes studying their design and testing their performance and scalability against a set of recommended file system benchmarks. Most computers are already based on multi-core and multiple processor architectures. Because of that, the need for using concurrent programming models has increased. Transactions can be very helpful for supporting concurrent programming models, which ensure that system updates are consistent. Unfortunately, the majority of operating systems and file systems either do not support transactions at all, or they simply do not expose them to the users.
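As a rough illustration of the copy-on-write discipline this abstract describes (a toy in-memory sketch, not code from the thesis): an update never overwrites the live block; it writes a new block and then flips a single pointer, so a crash before the flip leaves the old, consistent version intact.

#include <stdio.h>
#include <string.h>

#define NBLOCKS 8
#define BLKSZ   16

static char disk[NBLOCKS][BLKSZ];   /* toy "disk"                      */
static int  next_free = 1;          /* block 0 holds the first version */
static int  root = 0;               /* pointer to the live block       */

/* COW update: write to a fresh block, then atomically update the pointer.
 * The old block is untouched and can serve as a snapshot. */
static void cow_write(const char *data)
{
    int newblk = next_free++;               /* allocate, never overwrite */
    strncpy(disk[newblk], data, BLKSZ - 1);
    root = newblk;                          /* the one atomic "commit"   */
}

int main(void)
{
    strncpy(disk[0], "version 1", BLKSZ - 1);
    int snapshot = root;        /* a snapshot is just a saved pointer */
    cow_write("version 2");
    printf("live: %s, snapshot: %s\n", disk[root], disk[snapshot]);
    return 0;
}

Real COW file systems such as ZFS and Btrfs apply the same discipline to whole trees of blocks, propagating the pointer change up to a root that is updated atomically.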
  • CERIAS Tech Report 2017-5: Deceptive Memory Systems by Christopher N. Gutierrez
CERIAS Tech Report 2017-5: Deceptive Memory Systems, by Christopher N. Gutierrez. Center for Education and Research in Information Assurance and Security, Purdue University, West Lafayette, IN 47907-2086. A dissertation submitted to the faculty of Purdue University in partial fulfillment of the requirements for the degree of Doctor of Philosophy, December 2017.

Dissertation committee: Dr. Eugene H. Spafford, Co-Chair (Department of Computer Science); Dr. Saurabh Bagchi, Co-Chair (Department of Computer Science); Dr. Dongyan Xu (Department of Computer Science); Dr. Mathias Payer (Department of Computer Science). Approved by: Dr. Voicu Popescu, for Dr. William J. Gorman, Head of the Graduate Program.

Dedication: This work is dedicated to my wife, Gina. Thank you for all of your love and support. The moon awaits us.

Acknowledgments: I would like to thank Professors Eugene Spafford and Saurabh Bagchi for their guidance, support, and advice throughout my time at Purdue. Both have been instrumental in my development as a computer scientist, and I am forever grateful. I would also like to thank the Center for Education and Research in Information Assurance and Security (CERIAS) for fostering a multidisciplinary security culture in which I had the privilege to be part of. Special thanks to Adam Hammer and Ronald Castongia for their technical support and Thomas Yurek for his programming assistance for the experimental evaluation. I am grateful for the valuable feedback provided by the members of my thesis committee, Professor Dongyan Xu and Professor Mathias Payer.
  • Membrane: Operating System Support for Restartable File Systems, by Swaminathan Sundararaman, Sriram Subramanian, Abhishek Rajimwale, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau, and Michael M. Swift
Membrane: Operating System Support for Restartable File Systems. Swaminathan Sundararaman, Sriram Subramanian, Abhishek Rajimwale, Andrea C. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau, Michael M. Swift. Computer Sciences Department, University of Wisconsin, Madison.

Abstract: We introduce Membrane, a set of changes to the operating system to support restartable file systems. Membrane allows an operating system to tolerate a broad class of file system failures and does so while remaining transparent to running applications; upon failure, the file system restarts, its state is restored, and pending application requests are serviced as if no failure had occurred. Membrane provides transparent recovery through a lightweight logging and checkpoint infrastructure, and includes novel techniques to improve performance and correctness of its fault-anticipation and recovery machinery. We tested Membrane with ext2, ext3, and VFAT. Through experimentation, we show that Membrane induces little performance overhead and can tolerate a wide range of file system crashes.

From the introduction: …and most complex code bases in the kernel. Further, file systems are still under active development, and new ones are introduced quite frequently. For example, Linux has many established file systems, including ext2 [34], ext3 [35], reiserfs [27], and still there is great interest in next-generation file systems such as Linux ext4 and btrfs. Thus, file systems are large, complex, and under development, the perfect storm for numerous bugs to arise. Because of the likely presence of flaws in their implementation, it is critical to consider how to recover from file system crashes as well. Unfortunately, we cannot directly apply previous work from the device-driver literature to improving file-system fault recovery. File systems, unlike device drivers, are extremely stateful, as they manage vast amounts of both in-memory and persistent data;
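The recovery flow the abstract describes (checkpoint, detect failure, restore, replay pending work) can be illustrated with a deliberately tiny, generic checkpoint-and-replay toy in C; this is our own sketch of the general idea, not code or an API from Membrane:

#include <stdio.h>

#define LOGSZ 8

/* Toy "file system" state plus an operation log since the last checkpoint. */
static int state, checkpoint;
static int log_ops[LOGSZ], nlog;

static void do_op(int delta) {          /* apply and log an operation */
    state += delta;
    log_ops[nlog++] = delta;
}

static void take_checkpoint(void) {     /* persist state, truncate the log */
    checkpoint = state;
    nlog = 0;
}

static void crash_and_recover(void) {
    state = -9999;                      /* pretend in-memory state is lost */
    state = checkpoint;                 /* restore the checkpoint          */
    for (int i = 0; i < nlog; i++)      /* replay logged operations        */
        state += log_ops[i];
}

int main(void)
{
    do_op(5); take_checkpoint();        /* state 5 is now durable          */
    do_op(3); do_op(2);                 /* logged but not yet checkpointed */
    crash_and_recover();
    printf("state after recovery: %d\n", state);  /* 10, as if no crash */
    return 0;
}

Membrane's real machinery is far more involved (in-kernel state, fault anticipation, transparent request re-issue), but the restore-then-replay shape is the same.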
  • Ext4 File System and Crash Consistency
Ext4 file system and crash consistency. Changwoo Min.

Summary of last lectures:
• Tools: building, exploring, and debugging the Linux kernel
• Core kernel infrastructure
• Process management & scheduling
• Interrupt & interrupt handler
• Kernel synchronization
• Memory management
• Virtual file system
• Page cache and page fault

Today: ext4 file system and crash consistency
• File system in the Linux kernel
• Design considerations of a file system
• History of file system design
• On-disk structure of Ext4
• File operations
• Crash consistency

File system in the Linux kernel: a user-space application (e.g. cp) issues syscalls (open, read, write, etc.); the kernel's VFS (Virtual File System) layer dispatches to concrete filesystems (ext4, FAT32, JFFS2), which sit above the block layer and the hardware (hard disk, USB drive, embedded flash).

What is a file system fundamentally?

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <dirent.h>
#include <sys/stat.h>

int main(int argc, char *argv[])
{
    int fd;
    char buffer[4096];
    struct stat stat_buf;
    DIR *dir;
    struct dirent *entry;

    /* 1. Path name -> inode mapping */
    fd = open("/home/lkp/hello.c", O_RDONLY);

    /* 2. File offset -> disk block address mapping */
    pread(fd, buffer, sizeof(buffer), 0);

    /* 3. File meta data operation */
    fstat(fd, &stat_buf);
    printf("file size = %ld\n", (long)stat_buf.st_size);

    /* 4. Directory operation */
    dir = opendir("/home");
    entry = readdir(dir);
    printf("dir = %s\n", entry->d_name);
    return 0;
}

Why do we care about the EXT4 file system?
• Most widely-deployed file system
• Default file system of major Linux distributions
• File system used in Google data centers
• Default file system of the Android kernel
• Follows the traditional file system design

History of file system design:
• UFS (Unix File System): the original UNIX file system, designed by Dennis Ritchie and Ken Thompson (1974). The first Linux file system (ext) and Minix FS have a similar layout.
• Performance problem of UFS (and the first Linux file system): especially, long seek time between an inode and its data block.
• FFS (Fast File System): the file system of BSD UNIX, designed by Marshall Kirk McKusick, et al.
  • CS 152: Computer Systems Architecture Storage Technologies
CS 152: Computer Systems Architecture, Storage Technologies. Sang-Woo Jun, Winter 2019.

Storage used to be a secondary concern. Typically, storage was not a first-order citizen of a computer system:
• As alluded to by its name, "secondary storage"
• Its job was to load programs and data to memory, and disappear
• Most applications only worked with the CPU and system memory (DRAM)
• Extreme applications like DBMSs were the exception, because conventional secondary storage was very slow
• Things are changing!

Some (pre)history (photos from Wikipedia):
• Magnetic core memory, 1950s~1970s, 72 KiB per cubic foot!
• Rope memory (ROM), 1960s, 100s of KiB (1024 bits in photo), hand-woven to program the Apollo guidance computer
• Drum memory, 1950s~1970s
• Floppy disk drives, 1970s~2000s, 100 KiBs to 1.44 MiB
• Hard disk drives, 1950s to present, MBs to TBs
• Solid State Drives, 2000s to present, GBs to TBs
• Non-Volatile Memory, 2010s to present, GBs

Hard disk drives: the dominant storage medium for the longest time, and still the largest capacity share. Data is organized into multiple magnetic platters; a mechanical head needs to move to where the data is in order to read it. Good sequential access, terrible random access: 100s of MB/s sequential, maybe 1 MB/s for 4 KB random reads. The time for the head to move to the right location ("seek time") may be milliseconds long: 1,000,000s of cycles! Typically "ATA" (including IDE and EIDE), and later "SATA" interfaces, connected via the "South bridge" chipset. (Ding Yuan, "Operating Systems ECE344 Lecture 11: File
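The "1,000,000s of cycles" remark is easy to verify with one multiplication; a sketch, where the ~5 ms access time and 3 GHz clock are our assumed round numbers:

#include <stdio.h>

int main(void)
{
    double access_s = 0.005;   /* assumed ~5 ms seek + rotation */
    double clock_hz = 3e9;     /* assumed 3 GHz CPU clock       */
    /* Cycles the CPU could have executed while one disk access completes. */
    printf("cycles per disk access: ~%.0f million\n",
           access_s * clock_hz / 1e6);
    return 0;
}

At these numbers a single mechanical access costs about 15 million cycles, which is why random HDD I/O dominates everything else.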
  • Filesystem Considerations for Embedded Devices ELC2015 03/25/15
Filesystem considerations for embedded devices. ELC2015, 03/25/15. Tristan Lelong, senior embedded software engineer.

Abstract: The goal of this presentation is to answer a question asked by several customers: which filesystem should you use within your embedded design's eMMC/SD card? These storage devices use a standard block interface, compatible with traditional filesystems, but the constraints are not those of desktop PC environments. EXT2/3/4, BTRFS, and F2FS are the first of many solutions which come to mind, but how do they all compare? Typical queries include performance, longevity, tools availability, support, and power-loss robustness. This presentation will not dive into implementation details but will instead summarize the answers with the help of various figures and meaningful test results.

Table of contents: 1. Introduction; 2. Block devices; 3. Available filesystems; 4. Performance; 5. Tools; 6. Reliability; 7. Conclusion.

About the author: Tristan Lelong, embedded software engineer @ Adeneo Embedded; French, living in the Pacific Northwest; embedded software, free software, and Linux kernel enthusiast.

Introduction: More and more embedded designs rely on smart memory chips rather than bare NAND or NOR. This presentation will start by describing some context to help understand the differences between NAND and MMC, some typical requirements found in embedded device designs, and potential filesystems to use on MMC devices. The focus will then move to block filesystems: how they are supported and what features they advertise. To help understand how they compare, we will present some benchmarks and comparisons regarding tools, reliability, and performance.

Block devices: MMC, eMMC, SD card vocabulary: MMC: MultiMediaCard is a memory card unveiled in 1997 by SanDisk and Siemens, based on NAND flash memory.
  • Advfs Command Line and Application Programming Interface
AdvFS Command Line and Application Programming Interface: External Reference Specification, Version 1.13. JA CASL. Building ZK3, 110 Spit Brook Road, Nashua, NH 03062. Copyright (C) 2008 Hewlett-Packard Development Company, L.P.

Preface: Version 1.4 of the AdvFS Command Line and API External Reference Specification is being made available to all partners in order to allow them to design and implement code meeting the specifications contained herein.

Contents: 1 Introduction; 1.1 Abstract; 1.2 Audience; 1.3 Terms and Definitions; 1.4 Related Documents; 1.5 Acknowledgements.
  • Petascale Data Management: Guided by Measurement
Petascale Data Management: Guided by Measurement. Petascale Data Storage Institute, www.pdsi-scidac.org.

Member organizations: Los Alamos National Laboratory (institute.lanl.gov/pdsi/); Parallel Data Lab, Carnegie Mellon University (www.pdl.cmu.edu/); Oak Ridge National Laboratory (www.csm.ornl.gov/); Sandia National Laboratories (www.sandia.gov/); National Energy Research Scientific Computing Center (pdsi.nersc.gov/); Center for Information Technology Integration, U. of Michigan (www.citi.umich.edu/projects/pdsi/); Pacific Northwest National Laboratory (www.pnl.gov/); University of California at Santa Cruz (www.pdsi.ucsc.edu/).

The Computer Failure Data Repository. Goal: to collect and make available failure data from a large variety of sites, for a better understanding of the characteristics of failures in the real world. Now maintained by USENIX at cfdr.usenix.org/. [Table residue: repository entries listed by name, system type, system size, time period, and type of data, e.g. 22 HPC clusters / ~5,000 nodes / 9 years / any node outage, a 765-node system (2008), and 1 HPC cluster over 5 years; systems mentioned include MPP2, Lustre, Red Storm, and Pittsburgh's 3,400 disks.]

Filesystems Statistics Survey. Goals: gather and build a large database of static filetree summaries; build a small, non-invasive, anonymizing stats-gathering tool; distribute the fsstats tool via an easily used web site; encourage contributions (output of the tool) from many file systems; offer uploaded statistics and summaries to the public. [Table residue: per-filetree summary fields: label, date, type, total size (TB), total space (TB), # files (M), # dirs (K), max size (GB), max space (GB), max dir ents, max name bytes, avg file (MB), avg dir ents.]
  • Oracle Database Administrator's Reference for UNIX-Based Operating Systems
Oracle® Database Administrator's Reference, 10g Release 2 (10.2) for UNIX-Based Operating Systems. B15658-06, March 2009. Copyright © 2006, 2009, Oracle and/or its affiliates. All rights reserved.

Primary Author: Brintha Bennet. Contributing Authors: Kevin Flood, Pat Huey, Clara Jaeckel, Emily Murphy, Terri Winters, Ashmita Bose. Contributors: David Austin, Subhranshu Banerjee, Mark Bauer, Robert Chang, Jonathan Creighton, Sudip Datta, Padmanabhan Ganapathy, Thirumaleshwara Hasandka, Joel Kallman, George Kotsovolos, Richard Long, Rolly Lv, Padmanabhan Manavazhi, Matthew Mckerley, Sreejith Minnanghat, Krishna Mohan, Rajendra Pingte, Hanlin Qian, Janelle Simmons, Roy Swonger, Lyju Vadassery, Douglas Williams.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this software or related documentation is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS: Programs, software, databases, and related documentation and technical data delivered to U.S.
  • A File System Level Snapshot in Ext4
A File System Level Snapshot in Ext4. Computer Engineering and Intelligent Systems, www.iiste.org, ISSN 2222-1719 (Paper), ISSN 2222-2863 (Online), Vol 2, No. 8, 2011.

Uma Nagaraj, Ganesh Patil, Swapnil Gaikwad, Akshay Nehe, Ashish Mayekar. Computer Engineering Department, Pune University, MAE, Alandi, Pune, Maharashtra 412105, India. E-mail: [email protected]

Received: 2011-10-20. Accepted: 2011-10-29. Published: 2011-11-04.

Abstract: A snapshot makes a copy of the current working system. Creating a snapshot with modest processing and overheads is important for providing reliable data service during backup. There are two types of snapshot techniques: the volume-based approach and the system-based approach. The volume-based Logical Volume Manager provides compact space usability, but requires reserved space to save the snapshot and burdens the system with extra overhead. The system-based snapshot works better than the volume-based one: a file-system-based snapshot does not need reserved space for the snapshot. Therefore, the system-based snapshot is considered a good solution in computing environments where large-scale space management capability is not that critical. However, such a snapshot feature is available only in the ext3 file system, in Linux kernels 2.2 and 2.6. In this paper, we propose a system-based snapshot for the ext4 file system in Linux kernel 2.6 or higher.
  • State-Of-The-Art Garbage Collection Policies for NILFS2
State-of-the-art Garbage Collection Policies for NILFS2. Diploma thesis submitted in partial fulfillment of the requirements for the degree of Diplom-Ingenieur in Software Engineering/Internet Computing by Andreas Rohner, BSc (registration number 0502196), at the Faculty of Informatics, TU Wien. Advisor: Ao.Univ.Prof. Dipl.-Ing. Dr.techn. M. Anton Ertl. Vienna, 23rd January 2018.

Declaration of authorship (translated from German): I hereby declare that I have written this thesis independently, that I have fully listed the sources and aids used, and that I have clearly marked, with a citation of the source, all passages of the work, including tables, maps, and figures, that are taken from other works or from the Internet, whether verbatim or in substance.

Acknowledgments (translated from German): I thank my advisor Ao.Univ.Prof. Dipl.-Ing. Dr.techn. M. Anton Ertl for his patience and support. I also owe thanks to Ryusuke Konishi, the maintainer of the NILFS2 subsystem in the Linux kernel, for reviewing my patches and for the many valuable suggestions for improvement.
  • Performance and Scalability of a Sensor Data Storage Framework
Aalto University, School of Science, Degree Programme in Computer Science and Engineering. Paul Tötterman: Performance and Scalability of a Sensor Data Storage Framework. Master's Thesis, Helsinki, March 10, 2015. Pages: 67. Major: Software Technology (code T-110). Supervisor: Associate Professor Keijo Heljanko. Advisor: Lasse Rasinen, M.Sc. (Tech.)

Abstract: Modern artificial intelligence and machine learning applications build on analysis and training using large datasets. New research and development does not always start with existing big datasets, but accumulates data over time. The same storage solution does not necessarily cover the scale during the lifetime of the research, especially if scaling up from common workgroup storage technologies. The storage infrastructure at ZenRobotics has grown using standard workgroup technologies. The current approach is starting to show its limits, while the storage growth is predicted to continue and accelerate. Successful capacity planning and expansion requires a better understanding of the patterns of the use of storage and its growth. We have examined the current storage architecture and stored data from different perspectives in order to gain a better understanding of the situation. By performing a number of experiments we determine key properties of the employed technologies. The combination of these factors allows us to make informed decisions about future storage solutions.