File Systems


Parallel Storage Systems, 2021-04-20
Jun.-Prof. Dr. Michael Kuhn ([email protected])
Parallel Computing and I/O, Institute for Intelligent Cooperating Systems
Faculty of Computer Science, Otto von Guericke University Magdeburg
https://parcio.ovgu.de

Outline
• Review
• Introduction
• Structure
• Example: ext4
• Alternatives
• Summary

Storage Devices (Review)
• Which hard-disk drive parameter is increasing at the slowest rate?
  1. Capacity
  2. Throughput
  3. Latency
  4. Density
• Which RAID level does not provide redundancy?
  1. RAID 0
  2. RAID 1
  3. RAID 5
  4. RAID 6
• Which problem is called the write hole?
  1. Inconsistency due to a non-atomic data/parity update
  2. Incorrect parity calculation
  3. Storage device failure during reconstruction
  4. Partial stripe update

Motivation (Introduction)
1. File systems provide structure
   • File systems typically use a hierarchical organization
   • The hierarchy is built from files and directories
   • Other approaches: tagging, queries etc.
2. File systems manage data and metadata
   • They are responsible for block allocation and management
   • Access is handled via file and directory names
   • Metadata includes access permissions, time stamps etc.
   • File systems use underlying storage devices
     • Devices can also be provided by storage arrays such as RAID
     • Common choices on Linux are the Logical Volume Manager (LVM) and/or mdadm

Examples (Introduction)
• Linux: tmpfs, ext4, XFS, btrfs, ZFS
• Windows: FAT, exFAT, NTFS
• OS X: HFS+, APFS
• Universal: ISO9660, UDF
• Pseudo: sysfs, proc

Examples (Introduction, continued)
• Network: NFS, AFS, Samba
• Cryptographic: EncFS, eCryptfs
• Parallel distributed: Spectrum Scale, Lustre, OrangeFS, CephFS, GlusterFS
• Most of these use another underlying file system

I/O Interfaces (Introduction)
• I/O operations are realized using I/O interfaces
  • Interfaces are available for different abstraction levels
  • Interfaces forward operations to the actual file system
• Low-level interfaces provide basic functionality
  • POSIX, MPI-IO
• High-level interfaces provide more convenience
  • HDF, NetCDF, ADIOS

I/O Operations (Introduction)

    fd = open("/path/to/file",
              O_RDWR | O_CREAT | O_TRUNC,
              S_IRUSR | S_IWUSR);

    rv = close(fd);
    rv = unlink("/path/to/file");

• open can be used to open and create files
  • It features many different flags and modes
    • O_RDWR: open for reading and writing
    • O_CREAT: create the file if necessary
    • O_TRUNC: truncate the file if it exists already
• Initial access happens via a path
  • Afterwards, file descriptors can be used (with a few exceptions)
• All functions provide a return value
  • errno should be checked in case of errors

I/O Operations (Introduction, continued)

    nb = write(fd, data, sizeof(data));

• write returns the number of written bytes
  • This does not necessarily correspond to the given size (error handling!)
• write updates the file pointer internally (alternative: pwrite)
• These functions are provided by libc
  • libc performs the corresponding system calls
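The error-handling remark deserves a concrete illustration: a single write call may transfer fewer bytes than requested, so robust code loops until everything is written and inspects errno on failure. The following is a minimal sketch along the lines of the slide's open/write/close example; it is not taken from the slides, and the path /tmp/fs-example and the buffer contents are placeholders.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char data[] = "hello, file systems\n";

        /* Create the file if necessary and truncate it, as on the slide. */
        int fd = open("/tmp/fs-example", O_RDWR | O_CREAT | O_TRUNC,
                      S_IRUSR | S_IWUSR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        size_t written = 0;
        while (written < sizeof(data)) {
            /* write may transfer fewer bytes than requested. */
            ssize_t nb = write(fd, data + written, sizeof(data) - written);
            if (nb < 0) {
                if (errno == EINTR)
                    continue; /* interrupted by a signal, retry */
                perror("write");
                close(fd);
                return 1;
            }
            written += (size_t)nb;
        }

        if (close(fd) < 0)
            perror("close");
        return 0;
    }

The same pattern applies to read, which may likewise return fewer bytes than requested.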
Virtual File System (Switch) (Introduction)
• VFS is a central file system component in the kernel
  • It provides a standardized interface for all file systems (POSIX)
  • It defines the file system structure and interface for the most part
• VFS forwards operations performed by applications to the corresponding file system
  • The file system is selected based on the mount point
• This enables supporting a wide range of different file systems
  • Applications are still portable due to POSIX

[Figure: The Linux Storage Stack Diagram (version 4.10, 2017-03-10, covering Linux kernel 4.10), showing applications, the VFS, block-based/network/pseudo/special-purpose/stackable file systems, the page cache, the block layer and the device drivers; created by Werner Fischer and Georg Schönberger, CC-BY-SA 3.0, http://www.thomas-krenn.com/en/wiki/Linux_Storage_Stack_Diagram] [Fischer and Schönberger, 2017]

Virtual File System (Switch) (Introduction, continued)
• Applications call functions in userspace
  • libc performs system calls
• System calls are handled by VFS
  • VFS determines the correct file system instance
• Data is read/written via the page cache or directly
• The block layer handles communication with the devices
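One quick way to see the VFS as the kernel's registry of file system types is to print /proc/filesystems, where the running Linux kernel lists every file system it currently supports; entries marked nodev (such as proc, sysfs or tmpfs) do not require an underlying block device. This small sketch is not part of the slides, merely an illustration:

    #include <stdio.h>

    /* Print the file system types currently registered with the kernel's VFS.
     * Lines prefixed with "nodev" do not require an underlying block device. */
    int main(void)
    {
        FILE *fp = fopen("/proc/filesystems", "r");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        char line[256];
        while (fgets(line, sizeof(line), fp) != NULL)
            fputs(line, stdout);

        fclose(fp);
        return 0;
    }

Mounting then binds one of these types to a mount point, which is how the VFS selects the correct file system instance for a given path.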
File System Objects (Structure)
• There are differences between the user and the system point of view
  • Users deal with files and directories that contain data and metadata
  • Files consist of bytes, directories contain files and further directories
  • The system manages all internals
    • It combines individual blocks into files etc.
• Inodes
  • The most basic data structure in POSIX file systems
  • Each file and directory is represented by an inode (see stat)
  • Inodes contain mostly metadata
    • Some of the metadata is visible to users, some is internal
  • Inodes are typically referenced by ID and have a fixed size

File System Objects (Structure, continued)
• Files
  • Files contain data in the form of a byte array
    • POSIX specifies that data is a byte stream
  • Data can be read/written using explicit functions
  • Data can also be mapped into memory for implicit access
• Directories
  • Directories organize the file system's namespace
  • They can contain files and further directories
    • Directories within directories lead to a hierarchical namespace
  • From a user's point of view, directories are a list of entries
  • Internally, file systems often use tree structures

B-Tree vs. B+-Tree (Structure)
[Figure: example B-tree with root keys 7 and 16 and the remaining keys 1, 2, 5, 6, 9, 12, 18 and 21 in the child nodes; CyHawk, 2010]
• B-trees are generalized binary trees
  • They are optimized for systems that read/write large blocks
  • Pointers and data are mixed in the tree

B-Tree vs. B+-Tree (Structure, continued)
[Figure: the same keys as a B+-tree; all keys, including 7 and 16, also appear in the leaf nodes; CyHawk, 2010]
• B+-trees are a modification of B-trees
  • Data is only stored in the leaf nodes
  • This is advantageous for caching, since the nodes are easier to cache
  • Used in NTFS, XFS etc.

Alternative Trees (Structure)
• H-trees
  • Based on B-trees
  • Different handling of hash collisions
  • Used in ext3 and ext4
• Bε-trees
  • Optimized for write operations
  • Improved performance for insert, range query and update operations

Files (Structure)

    nb = pwrite(fd, data, sizeof(data), 42);
    nb = pread(fd, data, sizeof(data), 42);

• pwrite and pread behave like write and read
  • They allow specifying the offset and do not modify the file pointer
  • Both functions are therefore thread-safe
• Access is done via an open file descriptor
  • It can be used in parallel by multiple threads
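The claim that pread and pwrite are thread-safe because they take an explicit offset can be made concrete with a small sketch (not from the slides): two threads read disjoint regions of the same file descriptor without any locking. The file name (e.g. the file created in the earlier sketch) and the chunk size are placeholders.

    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    #define CHUNK 4096

    struct task {
        int fd;
        off_t offset;
        char buf[CHUNK];
        ssize_t nb;
    };

    /* Each thread reads its own region; pread takes an explicit offset
     * and does not touch the shared file pointer, so no locking is needed. */
    static void *reader(void *arg)
    {
        struct task *t = arg;
        t->nb = pread(t->fd, t->buf, sizeof(t->buf), t->offset);
        return NULL;
    }

    int main(void)
    {
        int fd = open("/tmp/fs-example", O_RDONLY); /* placeholder path */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct task tasks[2] = {
            { .fd = fd, .offset = 0 },
            { .fd = fd, .offset = CHUNK },
        };
        pthread_t threads[2];

        for (int i = 0; i < 2; i++)
            pthread_create(&threads[i], NULL, reader, &tasks[i]);
        for (int i = 0; i < 2; i++) {
            pthread_join(threads[i], NULL);
            printf("thread %d read %zd bytes at offset %lld\n",
                   i, tasks[i].nb, (long long)tasks[i].offset);
        }

        close(fd);
        return 0;
    }

Compile with -pthread. Because pread never modifies the shared file pointer, both calls can proceed concurrently; with plain read, the threads would race on the file offset.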