ZFS Encryption and Other News in Solaris 11


Solaris

Securely and efficiently handling large and very large volumes of data is surely one of the outstanding tasks of every IT infrastructure provider today, whether in-house or on the open market. The focus lies in particular on data integrity, on the technical and organizational measures meant to guarantee it, and of course on the security aspect of accessing the stored data.

ZFS Encryption and Other News in Solaris 11
Thomas Nau, Universität Ulm – kiz

With its debut in Solaris 10, ZFS not only set new standards in the area of data integrity, it has over the past years become the benchmark for file systems altogether. All developments that have appeared since, and future ones as well, have to measure themselves against it. Yet the momentum of ZFS development remains unbroken. Solaris 11 delivers performance improvements as well as new capabilities. It is now possible, for example, to transparently encrypt ZFS file systems and volumes, including the block devices ZFS provides, and then to use those storage areas for virtualized systems in zones. Solaris 11 also addresses and simplifies data migration from UFS to ZFS.

Fundamental ZFS design criteria

Operating a central university infrastructure requires file systems and operating systems that take far-reaching precautions to protect data integrity and data security and thereby guard against data loss. From the very beginning, the development of ZFS set out to correct the design-related weaknesses of conventional file systems:

• Valid data is never overwritten (copy-on-write, COW).
• Strong checksums such as "fletcher4" or "sha256" make it possible to detect errors along the entire data path and, given suitably redundant disk configurations, to correct them as well. Notably, the checksums are stored separately from the actual data, in the so-called "pointer area". This arrangement reduces the impact of the failure of individual disk sectors.
• The system's particularly important metadata, i.e. the data describing the structures of file systems and pools, is stored multiple times in so-called "ditto blocks", independently of the overlying redundancy. On request, this principle can be extended to the user data as well by setting the corresponding properties on the ZFS file system.

Beyond that, ZFS offers numerous operational advantages, such as snapshots and clones. These are increasingly used to complement or even replace conventional backup solutions. Only this way can file systems holding several tens of millions of files be backed up once or several times a day, consistently in time from the application's point of view, with comparatively little effort.

Regular "scrubbing", i.e. verifying all checksums and making any corrections that may be required, protects against unpleasant surprises, because problems can be detected very early. The following example illustrates the detection of errors (see Listing 1).

# zpool scrub testpool
# sleep 15 ; zpool status -v testpool
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.
        An attempt was made to correct the error. Applications are
        unaffected.
action: Determine if the device needs to be replaced, and clear the
        errors using 'zpool online' or replace the device with
        'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-9P
 scrub: scrub in progress, 6.21% done, 0h4m to go
config:

        NAME          STATE     READ WRITE CKSUM
        testpool      ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c4t0d0s0  ONLINE       0     0     0
            c4t1d0s0  ONLINE       0     0    58  228.5K repaired

Listing 1

Encryption with ZFS

With the release of Solaris 11, users now additionally have the option of encrypting ZFS file systems and volumes with strong algorithms such as AES-256. The default is "AES-128-CCM". If the hardware in use offers accelerators for cryptographic operations, such as the Oracle T4 and T5 chips or the Intel AES-NI extension, these are detected and used automatically by the Solaris Crypto Framework. Encrypting the root file system is currently possible only for non-global zones. More on this later.

Two points must be considered before using encryption. First, it can only be enabled for newly created file systems and volumes; second, it cannot be disabled later either. You therefore commit yourself for the lifetime of the file systems. All the necessary commands are integrated into the "zfs" command-line tool. Encryption can also be used on already existing pools, as long as they carry at least version number 30 or can be upgraded accordingly (see Listing 2). The current ZFS versions in Illumos and FreeBSD, which are based on the former OpenSolaris sources, do not offer encryption. The associated ZFS properties can be read as usual (see Listing 3).

# zfs create -o encryption=aes-256-ccm pool/thomas
Enter passphrase for 'pool/thomas':
Must be at least 8 characters.
Enter passphrase for 'pool/thomas':
Enter again:

Listing 2

# zfs get \
  encryption,keychangedate,keysource,keystatus,rekeydate \
  pool/thomas
NAME         PROPERTY       VALUE                  SOURCE
pool/thomas  encryption     aes-256-ccm            local
pool/thomas  keychangedate  Sat Sep 15 18:03 2012  local
pool/thomas  keysource      passphrase,prompt      local
pool/thomas  keystatus      available              -
pool/thomas  rekeydate      Sat Sep 15 18:03 2012  local

Listing 3

Once chosen, the encryption algorithm cannot be changed. All associated ZFS properties are inherited, so all subordinate file systems, in particular snapshots and clones, are necessarily encrypted as well.

The key management used by ZFS is two-tiered. Visible to the outside is the so-called "wrapping key". The system generates it, for example, from a "passphrase"; alternatively, the key can be supplied as a byte sequence or a hexadecimal string. The wrapping key serves exclusively to protect the actual key used by the kernel, the "encryption key". When a new "passphrase" is set, the system merely re-encrypts the existing encryption key. The stored data is unaffected: it is not completely re-encoded, but remains encrypted with the respective old encryption keys. Incidentally, the same applies when the Solaris kernel changes the encryption key; here the system simply picks a sufficiently long byte sequence at random, using any available random-number generators via the Solaris Crypto Framework. The command "# zfs key -K pool/thomas" performs this task.

The change is reflected directly in the properties (see Listing 4). The new key is then used for all new data written from the moment of the change onward. Changes to this internal key should be made rather rarely and in line with locally mandated security recommendations. The National Institute of Standards and Technology (NIST) recommends changing such keys every two years.

# zfs get keychangedate,rekeydate pool/thomas
NAME         PROPERTY       VALUE                  SOURCE
pool/thomas  keychangedate  Sat Sep 15 18:03 2012  local
pool/thomas  rekeydate      Sat Sep 15 18:06 2012  local

Listing 4

Things are entirely different for the individual keys and "passphrases" belonging to each encrypted ZFS volume or file system. These can be changed at will. Several mechanisms and formats are available for querying them from the user or reading them from a device. Currently these are:

• Prompt
  The "passphrase" is queried interactively when the file system is created or mounted
• file://filename
  The key is read from a file, for instance on a USB stick
• pkcs11
  The key is read from a PKCS#11 token
• https://location
  The key is provided by a web server over an HTTPS connection

The following formats are available:

• Raw
  A byte sequence
• Hex
  A hexadecimal string
• Passphrase
  A password from which the key is generated

Caution: when an encrypted file system is unmounted ("umount"), the keys are not removed from the kernel. An administrator can therefore mount the file system again until the next reboot, without the key being needed. To prevent this, the encryption key of the file system or volume in question must be removed from the kernel explicitly with "# zfs key -u pool/thomas". The following examples illustrate how easy it is to work with the encryption capabilities of ZFS: changing the key (see Listing 5), loading the key into the kernel (see Listing 6), and changing the provisioning mechanism, for instance to a file (see Listing 7). Ideally, this file resides on an easily removable …

… is protected by the zone. This protection can, as desired, be very strict, allowing no exceptions at all, or it can be designed more flexibly.

DOAG News 3-2013 | 27
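The key-source mechanisms and formats described above are combined in the ZFS "keysource" property as a "format,location" pair. The following is only a sketch of reading a raw wrapping key from a removable medium: the pool name "pool", the file system name "secure", and the mount path "/media/stick" are invented for illustration; the command syntax follows the Solaris 11 zfs(1M) and pktool(1) man pages.

```shell
# Generate a 256-bit AES wrapping key into a file on the USB stick
# (pktool is part of the Solaris Crypto Framework)
pktool genkey keystore=file outkey=/media/stick/mykey keytype=aes keylen=256

# Create a file system whose wrapping key is read from that file;
# the format "raw" means the file holds the key as a plain byte sequence
zfs create -o encryption=aes-256-ccm \
           -o keysource=raw,file:///media/stick/mykey pool/secure
```

With "keysource=hex,file://..." the file would instead hold a hexadecimal string; "keysource=passphrase,prompt" restores the interactive default.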
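The wrapping-key operations described in the text — changing, loading, and unloading the key — can be sketched as follows, assuming the file system "pool/thomas" from the earlier listings; the subcommand syntax follows the Solaris 11 zfs(1M) man page.

```shell
# Change the wrapping key (prompts for a new passphrase);
# only the encryption key is re-wrapped, the data itself is not rewritten
zfs key -c pool/thomas

# Load the key into the kernel, e.g. before mounting after a reboot
zfs key -l pool/thomas

# Explicitly remove the key from the kernel after unmounting
zfs key -u pool/thomas
```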