Ruprecht-Karls-Universität Heidelberg, Kirchhoff-Institut für Physik

Inaugural dissertation submitted to the Combined Faculty of Natural Sciences and Mathematics of the Ruprecht-Karls-Universität Heidelberg for the doctoral degree (Doktorwürde), presented by Diplom-Physiker Arne Wiebalck from Bremerhaven. Date of the oral examination: 29 June 2005.

ClusterRAID: Architecture and Prototype of a Distributed Fault-Tolerant Mass Storage System for Clusters

Referees: Prof. Dr. Volker Lindenstruth and Prof. Dr. Thomas Ludwig

ClusterRAID: Architektur und Prototyp eines verteilten fehlertoleranten Massenspeicher-Systems für Cluster

In recent years, clusters built from commodity components have established themselves in many fields as the dominant architecture for high-performance computing. Owing to their better price/performance ratio, these systems, typically built from standard PCs or workstations and an interconnection network, have displaced the traditionally used integrated supercomputer architectures. With the ongoing paradigm shift from purely compute-intensive towards I/O-intensive applications, the mass storage systems used in clusters are becoming an ever more important component. That no standard for the use of distributed mass storage in clusters has established itself so far is attributable above all to the inherent unreliability of the underlying components.

This thesis describes the architecture and a prototype implementation of a distributed, fault-tolerant mass storage system for clusters. The basic idea of the architecture is to make the local hard disk of a cluster node reliable without changing the interface seen by the operating system or the application. Error-correcting codes are employed that allow the number of tolerable failures, and hence the reliability of the overall system, to be adjusted. The layout scheme for the data blocks within the system takes the access behaviour of a whole class of applications into account and can thereby reduce the required network accesses to a minimum. Thorough measurements and functional tests of the prototype, both stand-alone and in conjunction with local and distributed file systems, demonstrate the validity of the concept.

ClusterRAID: Architecture and Prototype of a Distributed Fault-Tolerant Mass Storage System for Clusters

During the past few years, clusters built from commodity off-the-shelf (COTS) components have emerged as the predominant supercomputer architecture. Typically comprising a collection of standard PCs or workstations and an interconnection network, they have replaced the traditionally used integrated systems due to their better price/performance ratio. As paradigms shift from merely computing-intensive to I/O-intensive applications, mass storage solutions for cluster installations become a more and more crucial aspect of these systems. The inherent unreliability of the underlying components is one of the reasons why no system has yet been established as a standard storage solution for clusters.

This thesis sets out the architecture and prototype implementation of a novel distributed mass storage system for commodity off-the-shelf clusters and addresses the issue of the unreliable constituent components. The key concept of the presented system is the conversion of the local hard disk drive of a cluster node into a reliable device while preserving the block device interface. Through the deployment of sophisticated erasure-correcting codes, the system allows the number of tolerable failures, and thus the overall reliability, to be adjusted. In addition, the applied data layout takes the access behaviour of a broad range of applications into account and minimizes the number of required network transactions. Extensive measurements and functionality tests of the prototype, both stand-alone and in conjunction with local or distributed file systems, show the validity of the concept.
Contents

1. Introduction
   1.1. Commodity Components in Clusters
        1.1.1. Hard Disk Drives
        1.1.2. Processors and Memory
        1.1.3. Networks and Busses
        1.1.4. Cluster Software
   1.2. Requirements in Science and Industry
        1.2.1. High Energy Physics Experiments
        1.2.2. The Google Web Search Engine
   1.3. Organization of the Thesis
2. Distributed Storage for Clusters
   2.1. SAN, NAS, and RAID
   2.2. Networked File Systems
        2.2.1. NFS
        2.2.2. DAFS
        2.2.3. MOSIX, MFS, and MOPI
        2.2.4. PVFS and Lustre
        2.2.5. GPFS
        2.2.6. Google File System
   2.3. Integrated Mass Storage Systems
        2.3.1. HPSS
        2.3.2. CASTOR
        2.3.3. Storage Tank
   2.4. Distributed Block Level Systems
        2.4.1. NBD and ENBD
        2.4.2. DRBD
        2.4.3. RAID-X OSM
        2.4.4. Petal
        2.4.5. Tertiary Disk
        2.4.6. DRAID
   2.5. Distributed RAID
   2.6. The need for something different
3. ClusterRAID Architecture
   3.1. Design Considerations
   3.2. Overview and Functionality
   3.3. Performance
   3.4. Scalability
   3.5. Fault-tolerance
        3.5.1. Basic definitions
        3.5.2. Reliability Modeling
        3.5.3. The Reliability of the ClusterRAID
4. Error and Erasure Correcting Codes
   4.1. Introduction
   4.2. Hamming Codes
   4.3. BCH Codes
   4.4. Reed-Solomon Codes
   4.5. Vandermonde-based Reed-Solomon Codes
   4.6. Other Algorithms
5. ClusterRAID Implementation
   5.1. Kernel Interfacing
   5.2. Overview
        5.2.1. The Main Module
        5.2.2. The Codec Module
        5.2.3. The XOR Module
   5.3. Functional Analysis
        5.3.1. Read Requests
        5.3.2. Write Requests
        5.3.3. Degraded Mode
   5.4. Additional Concepts
        5.4.1. Locking
        5.4.2. Copy Mode
        5.4.3. Kernel Caches
        5.4.4. Private Caches
        5.4.5. Check Blocks
   5.5. Monitoring, Debugging, Control, and Configuration
        5.5.1. Monitoring
        5.5.2. Driver Control and Configuration
        5.5.3. ClusterRAID Configuration and Startup
6. Prototyping and Benchmarks
   6.1. The Test Environment
        6.1.1. The Hard Disks
        6.1.2. The Network
        6.1.3. The Network Block Device
   6.2. ClusterRAID Performance
        6.2.1. Local Testing
        6.2.2. Remote Testing
        6.2.3. System Tests
   6.3. Functional Testing
        6.3.1. Online Repair and Node Replacement
        6.3.2. Stability and Consistency
        6.3.3. ClusterRAID and MFS
   6.4. Implemented Optimizations and Possible Improvements
        6.4.1. Software
        6.4.2. Hardware Support
7. Conclusion and Outlook
A. Appendix
   A.1. The Galois Field GF(2^4)
   A.2. A Production Quality Distributed RAID
   A.3. The DWARW Module
   A.4. The NetIO Network Benchmark
   A.5. MOSIX related Measurements
Bibliography

List of Figures
2.1. SAN schematic ...
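The table of contents points to the driver's write path (Section 5.3.2, Write Requests) and to a data layout that minimizes the number of network transactions. The fragment below is a user-space sketch of the general read-modify-write pattern used by distributed block-level RAID systems, offered only as an illustration and not as the ClusterRAID kernel code: a node that overwrites one of its local blocks ships just the XOR delta to the holder of a remote check block. The single parity block and the identifiers reliable_write and node_send are assumptions of the example; applying per-device Reed-Solomon coefficients to the delta, as in the sketch above, generalizes it to several check blocks.

    /* Illustrative user-space sketch of a delta-based block update;
     * not the ClusterRAID kernel driver. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define BLOCK_SIZE 4096                      /* toy block size in bytes */

    /* Stand-in for a network send to the node holding the check block. */
    static void node_send(const uint8_t *delta, size_t len)
    {
        printf("sending %zu-byte check-block update\n", len);
    }

    /* Overwrite a local data block and propagate only the delta. */
    static void reliable_write(uint8_t *local_blk, const uint8_t *new_blk)
    {
        uint8_t delta[BLOCK_SIZE];

        /* 1. Read-modify-write: the XOR of old and new contents is all the
         *    remote check block needs, so the old data never leaves the node. */
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            delta[i] = local_blk[i] ^ new_blk[i];

        /* 2. One network transaction per check block. */
        node_send(delta, sizeof(delta));

        /* 3. Commit the new contents locally. */
        memcpy(local_blk, new_blk, BLOCK_SIZE);
    }

    int main(void)
    {
        static uint8_t disk_block[BLOCK_SIZE];        /* current on-disk data */
        static uint8_t request[BLOCK_SIZE] = { 42 };  /* incoming write       */

        reliable_write(disk_block, request);
        return 0;
    }

Under such a layout the data blocks stay on their own node and only encoded deltas travel to the check-block holders, which is one way a data layout can keep network transactions to a minimum for applications that mostly read their local data.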