Bringing ZFS Information Into SNMP


Bringing ZFS information into SNMP
Thomas Stibor
GSI Helmholtz Centre for Heavy Ion Research, HPC
27 January 2014

What is SNMP?

• The Simple Network Management Protocol (SNMP) is a protocol for network management.
• It allows collecting information from switches, printers, Linux boxes, ... and also configuring those devices (write access).

  thomas@lxdv65:~>snmpget -v 1 -c public localhost 1.3.6.1.2.1.1.1.0
  iso.3.6.1.2.1.1.1.0 = STRING: "Linux lxdv65 3.8.13-tstibor-lxdv65-rev1 #1 SMP Wed May 15 12:32:59 CEST 2013 x86_64"

  thomas@lxdv65:~>snmpget -v 1 -c public localhost 1.3.6.1.2.1.25.1.4.0
  iso.3.6.1.2.1.25.1.4.0 = STRING: "BOOT_IMAGE=/boot/vmlinuz-3.8.13-tstibor-lxdv65-rev1 root=/dev/mapper/vg0-debian ro quiet"

What are these strange-looking numbers, e.g. 1.3.6.1.2.1.25.1.4.0?

• Each Object Identifier (OID for short) identifies a variable that can be read or set via SNMP.
• OIDs are organized hierarchically.

OIDs as a Tree (snmpwalk)

  thomas@lxdv65:~> snmpwalk -c public -v 2c localhost 1.3.6.1.4.1.2021.9.1
  iso.3.6.1.4.1.2021.9.1.1.1 = INTEGER: 1
  iso.3.6.1.4.1.2021.9.1.1.2 = INTEGER: 2
  ...
  iso.3.6.1.4.1.2021.9.1.1.12 = INTEGER: 12
  iso.3.6.1.4.1.2021.9.1.2.1 = STRING: "/"
  iso.3.6.1.4.1.2021.9.1.2.2 = STRING: "/sys"
  ...
  iso.3.6.1.4.1.2021.9.1.2.12 = STRING: "/zfs/pools-deduplication"
  iso.3.6.1.4.1.2021.9.1.3.1 = STRING: "rootfs"
  iso.3.6.1.4.1.2021.9.1.3.2 = STRING: "sysfs"
  ...
  iso.3.6.1.4.1.2021.9.1.3.5 = STRING: "devpts"
  iso.3.6.1.4.1.2021.9.1.3.6 = STRING: "tmpfs"

[Tree diagram: the root OID 1.3.6.1.4.1.2021.9.1 branches into subtrees 1, 2 and 3; their leaves (1, 2, ..., 12 under the first two, 1, 2, ..., 6 under the third) hold the integer indices, the paths and the device names listed above.]

Human-Readable OIDs

Given an OID:

• What is the semantic meaning of, e.g.,

  thomas@lxdv65:~>snmpget -v 1 -c public localhost 1.3.6.1.2.1.25.1.6.0
  iso.3.6.1.2.1.25.1.6.0 = Gauge32: 597

• Is there a description giving us more information?

  thomas@lxdv65:~>snmptranslate -m SNMPv2-MIB 1.3.6.1.2.1.1.1
  SNMPv2-MIB::sysDescr

  thomas@lxdv65:~>snmptranslate -m SNMPv2-MIB -On -Td 1.3.6.1.2.1.1.1
  .1.3.6.1.2.1.1.1
  sysDescr OBJECT-TYPE
    -- FROM SNMPv2-MIB
    -- TEXTUAL CONVENTION DisplayString
    SYNTAX       OCTET STRING (0..255)
    DISPLAY-HINT "255a"
    MAX-ACCESS   read-only
    STATUS       current
    DESCRIPTION  "A textual description of the entity. This value should
                  include the full name and version identification of the
                  system's hardware type, software operating-system, and
                  networking software."
  ::= { iso(1) org(3) dod(6) internet(1) mgmt(2) mib-2(1) system(1) 1 }

This information is provided in Management Information Base (MIB) files.

Desired ZFS Information to Bring into SNMP

  thomas@lxdv65:~>sudo zpool list
  NAME     SIZE  ALLOC   FREE  CAP  DEDUP     HEALTH    ALTROOT
  domov-0  178M   244K   178M   0%  1.00x     DEGRADED  -
  domov-1  178M   235K   178M   0%  1.00x     ONLINE    -
  domov-2  178M   235K   178M   0%  1.00x     ONLINE    -
  domov-3  178M  28.5M   150M  15%  1.00x     ONLINE    -
  domov-4  178M   235K   178M   0%  1.00x     ONLINE    -
  domov-5  178M   235K   178M   0%  1.00x     ONLINE    -
  domov-6  178M   235K   178M   0%  1.00x     ONLINE    -
  domov-7  178M   599K   177M   0%  2048.00x  ONLINE    -

  thomas@lxdv65:~>sudo zfs get all | grep "avail\|used "
  domov-0  used       163K   -
  domov-0  available  86.4M  -
  domov-1  used       157K   -
  domov-1  available  86.4M  -
  domov-2  used       157K   -
  domov-2  available  86.4M  -
  domov-3  used       18.9M  -
  domov-3  available  67.6M  -
  domov-4  used       157K   -
  domov-4  available  86.4M  -
  domov-5  used       157K   -
  domov-5  available  86.4M  -
  domov-6  used       157K   -
  domov-6  available  86.4M  -
  domov-7  used       256M   -
  domov-7  available  86.2M  -

We specify a MIB file to bring this ZFS information into SNMP.
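Before turning to the MIB, a quick preview of where this data will come from on the implementation side: a minimal sketch that iterates over all pools via libzfs and prints name, health and dedup ratio, roughly what `zpool list -o name,health,dedup` shows. It assumes the five-argument zpool_get_prop() of the 2013-era ZFS-on-Linux libzfs (newer releases add a trailing literal flag); the build line is likewise an assumption:

  /* list_pools.c -- sketch: print name, health and dedup ratio per pool.
   * Build (assumption): gcc list_pools.c -lzfs -o list_pools */
  #include <stdio.h>
  #include <libzfs.h>

  static int
  print_pool(zpool_handle_t *zhp, void *unused)
  {
      char health[ZFS_MAXPROPLEN];
      char dedup[ZFS_MAXPROPLEN];

      (void) unused;

      /* Fetch formatted property strings, e.g. "ONLINE" and "1.00x". */
      if (zpool_get_prop(zhp, ZPOOL_PROP_HEALTH, health,
              sizeof (health), NULL) == 0 &&
          zpool_get_prop(zhp, ZPOOL_PROP_DEDUPRATIO, dedup,
              sizeof (dedup), NULL) == 0)
          printf("%-10s %-10s %s\n", zpool_get_name(zhp), health, dedup);

      zpool_close(zhp);   /* zpool_iter() hands us an open handle */
      return 0;           /* non-zero would abort the iteration */
  }

  int
  main(void)
  {
      libzfs_handle_t *hdl = libzfs_init();   /* opens /dev/zfs */

      if (hdl == NULL) {
          fprintf(stderr, "libzfs_init failed\n");
          return 1;
      }
      zpool_iter(hdl, print_pool, NULL);   /* calls print_pool per pool */
      libzfs_fini(hdl);
      return 0;
  }

The same per-property calls, wrapped per table column, are what the sub-agent handler shown later ends up using.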
ZFS-MIB File

ZFS-MIB.txt:

  ZFS-MIB DEFINITIONS ::= BEGIN

  IMPORTS
      OBJECT-TYPE, MODULE-IDENTITY, enterprises, Counter64, Integer32
          FROM SNMPv2-SMI
  ...

  --
  -- A brief description and update information about the ZFS-MIB.
  --
  zfs MODULE-IDENTITY
      LAST-UPDATED "201312190000Z"
      ORGANIZATION "GSI"
      CONTACT-INFO "[email protected]"
      DESCRIPTION
          "This MIB module describes read-only ZFS information gathered
           through libzfs. This encompasses the health status, available,
           used and total space, as well as the compression and
           deduplication ratio of pools."
      REVISION "201312190000Z"
      DESCRIPTION "Initial revision."
      ::= { hpc 1 }

ZFS-MIB File (cont.)

ZFS-MIB.txt:

  ...
  ZFSUnsigned64 ::= TEXTUAL-CONVENTION
      DISPLAY-HINT "d"
      STATUS current
      DESCRIPTION
          "A 64-bit unsigned type (which doesn't exist in SMIv2) containing
           any unsigned 64-bit integer number. It is defined as a Counter64
           but doesn't carry the counter semantics."
      SYNTAX Counter64

  -- We are hosted under the GSI OID (2021).
  gsi OBJECT IDENTIFIER ::= { enterprises 2021 }
  hpc OBJECT IDENTIFIER ::= { gsi 255 }

  poolTable OBJECT-TYPE
      SYNTAX SEQUENCE OF PoolEntry
      MAX-ACCESS not-accessible
      STATUS current
      DESCRIPTION "ZFS pool watching information."
      ::= { zfs 1 }

  poolEntry OBJECT-TYPE
      SYNTAX PoolEntry
      MAX-ACCESS not-accessible
      STATUS current
      DESCRIPTION "An entry containing information on a ZFS pool."
      INDEX { poolIndex }
      ::= { poolTable 1 }
  ...

ZFS-MIB File (cont.)

ZFS-MIB.txt:

  ...
  PoolEntry ::= SEQUENCE {
      poolIndex          Integer32,      -- 1
      poolName           DisplayString,  -- 2
      poolHealth         DisplayString,  -- 3
      poolAvail          ZFSUnsigned64,  -- 4
      poolUsed           ZFSUnsigned64,  -- 5
      poolTotal          ZFSUnsigned64,  -- 6
      poolCompressRatio  DisplayString,  -- 7
      poolDedupRatio     DisplayString   -- 8
  }

  poolIndex OBJECT-TYPE
      SYNTAX Integer32 (0..255)
      MAX-ACCESS read-only
      STATUS current
      DESCRIPTION "Reference index for each observed ZFS pool."
      ::= { poolEntry 1 }

  poolName OBJECT-TYPE
      SYNTAX DisplayString (SIZE (0..255))
      MAX-ACCESS read-only
      STATUS current
      DESCRIPTION "Name of the ZFS pool."
      ::= { poolEntry 2 }
  ...

Inspect our ZFS-MIB File

  thomas@lxdv65:~>snmptranslate -Tp -IR ZFS-MIB::poolTable
  +--poolTable(1)
     |
     +--poolEntry(1)
        |  Index: poolIndex
        |
        +-- -R-- Integer32 poolIndex(1)
        |        Range: 0..255
        +-- -R-- String    poolName(2)
        |        Textual Convention: DisplayString
        |        Size: 0..255
        +-- -R-- String    poolHealth(3)
        |        Textual Convention: DisplayString
        |        Size: 0..15
        +-- -R-- Counter64 poolAvail(4)
        |        Textual Convention: ZFSUnsigned64
        +-- -R-- Counter64 poolUsed(5)
        |        Textual Convention: ZFSUnsigned64
        +-- -R-- Counter64 poolTotal(6)
        |        Textual Convention: ZFSUnsigned64
        +-- -R-- String    poolCompressRatio(7)
        |        Textual Convention: DisplayString
        |        Size: 0..15
        +-- -R-- String    poolDedupRatio(8)
                 Textual Convention: DisplayString
                 Size: 0..15

Inspect our ZFS-MIB File (cont.)

  thomas@lxdv65:~>snmptranslate -On -Td ZFS-MIB::poolHealth
  .1.3.6.1.4.1.2021.255.1.1.1.3
  poolHealth OBJECT-TYPE
    -- FROM ZFS-MIB
    -- TEXTUAL CONVENTION DisplayString
    SYNTAX       OCTET STRING (0..15)
    DISPLAY-HINT "255a"
    MAX-ACCESS   read-only
    STATUS       current
    DESCRIPTION  "Health status of ZFS pool."
  ::= { iso(1) org(3) dod(6) internet(1) private(4) enterprises(1) gsi(2021)
        hpc(255) zfs(1) poolTable(1) poolEntry(1) 3 }

• How do we implement an SNMP sub-agent daemon once we have specified our MIB file?
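Following that tutorial, the daemon scaffolding can stay very small. A minimal AgentX sub-agent skeleton, modeled on the net-snmp example sub-agent; init_poolTable() is the initializer that mib2c generates in the next step, and the program name "zfs-subagent" is an assumption:

  /* zfs_subagent.c -- minimal AgentX sub-agent skeleton (sketch).
   * Build (assumption): gcc zfs_subagent.c poolTable.c \
   *     `net-snmp-config --agent-libs` -o zfs_subagent */
  #include <net-snmp/net-snmp-config.h>
  #include <net-snmp/net-snmp-includes.h>
  #include <net-snmp/agent/net-snmp-agent-includes.h>

  void init_poolTable(void);   /* generated by mib2c, see next slide */

  /* A real daemon would clear this from a signal handler. */
  static volatile int keep_running = 1;

  int
  main(void)
  {
      snmp_enable_stderrlog();

      /* Run as an AgentX sub-agent, not a standalone master agent. */
      netsnmp_ds_set_boolean(NETSNMP_DS_APPLICATION_ID,
                             NETSNMP_DS_AGENT_ROLE, 1);

      init_agent("zfs-subagent");
      init_poolTable();             /* register the poolTable handlers */
      init_snmp("zfs-subagent");    /* read config, connect to master */

      while (keep_running)
          agent_check_and_process(1);   /* block until a request arrives */

      snmp_shutdown("zfs-subagent");
      return 0;
  }

For the sub-agent to connect, snmpd must run as AgentX master, i.e. with the line "master agentx" in snmpd.conf.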
Excellent starting point: http://www.net-snmp.org/wiki/index.php/Tutorials

From MIB ⇒ C

  thomas@lxdv65:~>env MIBS="+ZFS-MIB" mib2c -c mib2c.iterate.conf poolTable
  # generates poolTable.h and poolTable.c

  /*
   * Note: this file originally auto-generated by mib2c using
   *       : mib2c.iterate.conf 17821 2009-11-11 09:00:00Z dts12
   */
  #ifndef POOLTABLE_H
  #define POOLTABLE_H

  /* function declarations */
  void init_poolTable(void);
  void initialize_table_poolTable(void);
  Netsnmp_Node_Handler poolTable_handler;
  Netsnmp_First_Data_Point poolTable_get_first_data_point;
  Netsnmp_Next_Data_Point poolTable_get_next_data_point;

  /* column number definitions for table poolTable */
  #define COLUMN_POOLINDEX         1
  #define COLUMN_POOLNAME          2
  #define COLUMN_POOLHEALTH        3
  #define COLUMN_POOLAVAIL         4
  #define COLUMN_POOLUSED          5
  #define COLUMN_POOLTOTAL         6
  #define COLUMN_POOLCOMPRESSRATIO 7
  #define COLUMN_POOLDEDUPRATIO    8

  #endif /* POOLTABLE_H */

From MIB ⇒ C (cont.)

  /** Handles requests for the poolTable entries. */
  int
  poolTable_handler(netsnmp_mib_handler *handler,
                    netsnmp_handler_registration *reginfo,
                    netsnmp_agent_request_info *reqinfo,
                    netsnmp_request_info *requests)
  {
      netsnmp_request_info *request;
      netsnmp_table_request_info *table_info;
      struct poolTable_entry *table_entry;
      char result[ZFS_MAXPROPLEN];

      switch (reqinfo->mode) {
      case MODE_GET:
          for (request = requests; request; request = request->next) {
              table_entry = (struct poolTable_entry *)
                  netsnmp_extract_iterator_context(request);
              table_info = netsnmp_extract_table_info(request);
              switch (table_info->colnum) {
              ...
              case COLUMN_POOLHEALTH:
                  if (!table_entry) {
                      netsnmp_set_request_error(reqinfo, request,
                                                SNMP_NOSUCHINSTANCE);
                      continue;
                  }
                  /* Pool health. */
                  if (get_zpool_prop(libzfs_handle, table_entry->poolName,
                                     ZPOOL_PROP_HEALTH, result) == ERROR)
                      netsnmp_set_request_error(reqinfo, request,
                                                SNMP_NOSUCHINSTANCE);
                  else {
                      strcpy(table_entry->poolHealth, result);
                      table_entry->poolHealth_len = strlen(result);
                      snmp_set_var_typed_value(request->requestvb,
                          ASN_OCTET_STR,
                          (u_char *) table_entry->poolHealth,
                          table_entry->poolHealth_len);
                  }

Ask libzfs (/dev/zfs) for ZFS Information

First approach, don't do that!

  #define COMMAND_ARCSTATS        "/bin/cat /proc/spl/kstat/zfs/arcstats"
  #define COMMAND_ZPOOL_HEALTH    "/usr/local/sbin/zpool list -H -o name,health"
  #define COMMAND_ZGET_AVAIL_USED "/usr/local/sbin/zfs get -Hpo value used,available"

  /* Open the command for reading. */
  fp = popen(COMMAND_ZPOOL_HEALTH, "r");
  if (fp == NULL) {
      perror("popen");
      return ERROR;
  }

  i = 0;
  while (fgets(line, sizeof(line) - 1, fp) != NULL) {
      line_dup = strdup(line);
      while ((tok_str = strsep(&line_dup, "\t"))) {
          if (n_token == 0) {
              //printf("poolname: %s\n", tok_str);
              name_temp = strdup(tok_str);
          } else if (n_token == 1) {
              tok_str[strlen(tok_str) - 1] = '\0'; /* Remove trailing newline. */
              health_temp = strdup(tok_str);
          } else {
              free_pool_info(pool_info);
              return ERROR;
          }
          n_token = (n_token + 1) % 2;
      }
      pool_info[i] = malloc(sizeof(pool_info_t));
      strcpy(pool_info[i]->name, name_temp);
      strcpy(pool_info[i]->health, health_temp);
  ...

Ask libzfs (/dev/zfs) for ...
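The extracted deck breaks off at this slide title. For orientation, a hedged sketch of the libzfs-backed helper that the handler above calls: the get_zpool_prop() signature is inferred from its call site, the ERROR/OK constants follow the deck's convention, and the libzfs calls assume the 2013-era ZFS-on-Linux API (five-argument zpool_get_prop()):

  #include <string.h>
  #include <libzfs.h>

  #define OK      0
  #define ERROR   (-1)

  /* Fetch one formatted pool property into result, e.g. "ONLINE" for
   * ZPOOL_PROP_HEALTH or "2048.00x" for ZPOOL_PROP_DEDUPRATIO. */
  static int
  get_zpool_prop(libzfs_handle_t *hdl, const char *pool_name,
                 zpool_prop_t prop, char *result)
  {
      zpool_handle_t *zhp;

      zhp = zpool_open(hdl, pool_name);   /* talks to kernel via /dev/zfs */
      if (zhp == NULL)
          return ERROR;

      if (zpool_get_prop(zhp, prop, result, ZFS_MAXPROPLEN, NULL) != 0) {
          zpool_close(zhp);
          return ERROR;
      }

      zpool_close(zhp);
      return OK;
  }

With such a helper the sub-agent answers every GET from live kernel state through /dev/zfs, with no fork/exec per request and no fragile parsing of command output, which is the point of the "don't do that" warning above.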