When eBPF Meets FUSE: Improving the Performance of User File Systems

Slide 1: When eBPF Meets FUSE: Improving the Performance of User File Systems
Ashish Bijlani, PhD Student, Georgia Tech (@ashishbijlani)

Slide 2: In-Kernel vs User File Systems
"People who think that userspace filesystems are realistic for anything but toys are just misguided." - Linus Torvalds
"A lot of people once thought Linux and the machines it ran on were toys… Apparently I'm misguided." - Jeff Darcy

Slide 3: In-Kernel vs User File Systems
In-kernel file systems:
• Examples – Ext4, OverlayFS, etc.
• Pros – Native performance
• Cons – Poor security/reliability; not easy to develop/debug/maintain
User file systems:
• Examples – EncFS, Gluster, etc.
• Pros – Improved security/reliability; easy to develop/debug/maintain
• Cons – Poor performance!

Slide 4: File Systems in User Space (FUSE)
• State-of-the-art framework – all file system handlers implemented in user space
• More than 100 FUSE file systems
  – Stackable: Android SDCardFS, EncFS, etc.
  – Network: GlusterFS, Ceph, Amazon S3FS, etc.

    struct fuse_lowlevel_ops ops = {
        .lookup   = handle_lookup,
        .access   = NULL,
        .getattr  = handle_getattr,
        .setattr  = handle_setattr,
        .open     = handle_open,
        .read     = handle_read,
        .readdir  = handle_readdir,
        .write    = handle_write,
        // more handlers …
        .getxattr = handle_getxattr,
        .rename   = handle_rename,
        .symlink  = handle_symlink,
        .flush    = NULL,
    };

Slide 5: FUSE Architecture
[Diagram: an application request enters the kernel VFS, the FUSE driver places it on a queue, and the user-space FUSE daemon (libfuse) dequeues and handles it; a stackable daemon calls back into a lower file system (e.g., ext4), a network daemon forwards the request over the network, and the reply travels back along the same path.]

Slide 6: FUSE Performance
• "tar xf linux-4.17.tar.xz"
  – Intel i5-3350 quad core, Ubuntu 16.04.4 LTS
  – Linux 4.11.0, libfuse commit #386b1b
• Overhead vs. native: 78.8% on HDD, 81.1% on SSD
• More noticeable on faster media!
[Bar chart: time (sec) for native vs. FUSE on HDD and SSD]

Slide 7: FUSE Performance
• "cd linux-4.17; make tinyconfig; make -j4"
  – Intel i5-3350 quad core, SSD, Ubuntu 16.04.4 LTS
  – Linux 4.11.0, libfuse commit #386b1b
• 17.54% overhead
[Bar chart: time (sec), native vs. FUSE]

Slide 8: FUSE Architecture (request path)
[Diagram: for open("/mnt/foo/bar"), the VFS issues lookup("foo") and subsequent operations; every request (lookup, getattr, setattr, open, read, readdir, write, rename, symlink, close, getxattr, setxattr, …) is queued by the FUSE driver and requires a context switch to the user-space FUSE daemon before it can reach the lower file system (e.g., ext4).]

Slide 9: Requests received by FUSE daemon
• "cd linux-4.17; make tinyconfig; make -j4"
[Bar chart: number of requests (0K–400K) per operation: Lookup, Getattr, Rename, Setattr, Create, Open, Release, Getxattr, Mkdir, Unlink, Opendir, Readdir, Releasedir, Read, Write]

Slide 10: FUSE Optimizations
• Big 128K writes
  – "-o max_write=131072"
• Zero data copying for data I/O
  – "-o splice_read, splice_write, splice_move"
• Leveraging VFS caches
  – Page cache for data I/O: "-o writeback_cache"
  – Dentry and inode caches for lookup() and getattr(): "entry_timeout", "attr_timeout"
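In the low-level libfuse API, entry_timeout and attr_timeout are not global mount options; the daemon returns them with each lookup reply. The following is a minimal sketch (not from the slides) of a lookup handler that lets the VFS cache the dentry and attributes for one second; resolve_child() is a hypothetical helper assumed to fill in the child's struct stat.

    /* Sketch only: a low-level lookup handler that opts in to VFS caching.
     * Adjust FUSE_USE_VERSION for the libfuse version in use. */
    #define FUSE_USE_VERSION 30
    #include <fuse_lowlevel.h>
    #include <errno.h>
    #include <string.h>

    /* Hypothetical helper: resolves (parent, name) and fills *st. */
    extern int resolve_child(fuse_ino_t parent, const char *name, struct stat *st);

    void handle_lookup(fuse_req_t req, fuse_ino_t parent, const char *name)
    {
        struct fuse_entry_param e;

        memset(&e, 0, sizeof(e));
        if (resolve_child(parent, name, &e.attr) < 0) {
            fuse_reply_err(req, ENOENT);
            return;
        }
        e.ino = e.attr.st_ino;
        e.entry_timeout = 1.0;  /* VFS may cache the dentry for 1 s     */
        e.attr_timeout  = 1.0;  /* VFS may cache the attributes for 1 s */
        fuse_reply_entry(req, &e);
    }

Longer timeouts cut lookup() and getattr() traffic to the daemon, at the cost of the staleness and invalidation issues illustrated on the next few slides.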
Slide 11: FUSE Performance (optimizations enabled)
• "cd linux-4.17; make tinyconfig; make -j4"
• Intel i5-3350 quad core, Ubuntu 16.04.4 LTS
• Linux 4.11.0, libfuse commit #386b1b
• Opts enabled: -o max_write=128K, -o splice_read, -o splice_write, -o splice_move, entry_timeout > 0, attr_timeout > 0
• Opts do not help much!
[Bar chart: time (sec), Native vs. Regular vs. Optimized]

Slide 12: Requests received by FUSE daemon
• "cd linux-4.17; make tinyconfig; make -j4"
• 4x fewer lookup()s with optimizations
[Bar chart: number of requests (0K–400K) per operation, Regular vs. Optimized]

Slide 13: Requests received by FUSE daemon
• "cd linux-4.17; make tinyconfig; make -j4"
• atime changes during read() invalidate cached attributes
[Bar chart: number of requests per operation, Regular vs. Optimized]

Slide 14: Requests received by FUSE daemon
• "cd linux-4.17; make tinyconfig; make -j4"
• VFS issues getxattr() for each write() to read security labels
[Bar chart: number of requests per operation, Regular vs. Optimized]

Slide 15: eBPF
• Berkeley Packet Filter (BPF)
  – Pseudo machine architecture for packet filtering
• eBPF enhances BPF
  – Evolved into a generic kernel extension framework
  – Used by tracing, perf, and network subsystems

Slide 16: eBPF Overview
• Extensions written in C
• Compiled into BPF bytecode (Clang/LLVM)
• Code is verified and loaded into the kernel (bpf() syscall)
• Execution under a virtual machine runtime (sandbox)
• Shared BPF maps (key-value data) with user space
[Diagram: a BPF C program is compiled to bytecode by Clang/LLVM, loaded from user space via syscall, checked by the in-kernel verifier, and executed in the BPF virtual machine sandbox alongside kernel functions; BPF maps (key-value data structures) are shared between kernel and user space.]

Slide 17: eBPF Simplified Example

    struct bpf_map_def map = {
        .type        = BPF_MAP_TYPE_ARRAY,
        .key_size    = sizeof(u32),
        .value_size  = sizeof(u64),
        .max_entries = 1, // single element
    };

    // tracepoint/syscalls/sys_enter_open
    int count_open(struct syscall *args)
    {
        u32 key = 0;
        u64 *val = bpf_map_lookup_elem(&map, &key);
        if (val)
            __sync_fetch_and_add(val, 1);
        return 0;
    }

Slide 18: ExtFUSE: eBPF Meets FUSE
• Extension framework for file systems in user space
  – Register "thin" extensions that handle requests in the kernel
    • Avoid the user-space context switch!
  – Share data between the FUSE daemon and extensions using BPF maps
    • Cache metadata in the kernel

Slide 19: ExtFUSE Architecture
[Diagram: the FUSE daemon (libfuse + libExtFUSE) loads BPF code into the kernel and caches metadata in a BPF map; the FUSE driver can deliver a request to the extension running in the BPF VM, which serves it from the cached map entries, falling back to the request queue and the user-space daemon (and lower FS, e.g., ext4) otherwise.]

Slide 20: ExtFUSE Simplified Example

    struct bpf_map_def map = {
        .type        = BPF_MAP_TYPE_HASH,
        .key_size    = sizeof(u64),                   // ino (param 0)
        .value_size  = sizeof(struct fuse_attr_out),
        .max_entries = MAX_NUM_ATTRS,                 // 2 << 16
    };

    // getattr() kernel extension - cache attrs
    int getattr(struct extfuse_args *args)
    {
        u64 key = bpf_extfuse_read(args, PARAM0);
        struct fuse_attr_out *val = bpf_map_lookup_elem(&map, &key);
        if (val)
            bpf_extfuse_write(args, PARAM0, val);
    }
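The slides only show the kernel half of this attribute cache. The user-space half — the daemon publishing fresh attributes into the shared map after answering a getattr request — is sketched below with plain libbpf calls; attr_map_fd and the helper names are assumptions for illustration, not the actual libExtFUSE API.

    /* Sketch only: daemon-side maintenance of the shared attribute cache.
     * attr_map_fd is assumed to be the fd of the BPF hash map declared by
     * the getattr() extension, obtained when the BPF object was loaded. */
    #include <bpf/bpf.h>
    #include <linux/fuse.h>
    #include <stdint.h>

    /* Publish a getattr reply so the kernel extension can serve it next time. */
    int cache_attrs(int attr_map_fd, uint64_t ino, const struct fuse_attr_out *out)
    {
        return bpf_map_update_elem(attr_map_fd, &ino, out, BPF_ANY); /* insert or overwrite */
    }

    /* Drop a cached entry (e.g., on unlink) so stale attributes are never served. */
    int uncache_attrs(int attr_map_fd, uint64_t ino)
    {
        return bpf_map_delete_elem(attr_map_fd, &ino);
    }

Slide 21 below shows the matching invalidation performed directly inside a kernel extension.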
Slide 21: ExtFUSE Simplified Example (continued)
• Invalidate cached attrs from kernel extensions, e.g.:

    // setattr() kernel extension - invalidate attrs
    int setattr(struct extfuse_args *args)
    {
        u64 key = bpf_extfuse_read(args, PARAM0);
        bpf_map_delete_elem(&map, &key);
    }

• Cache attrs from the FUSE daemon
  – Insert into the map on atime change
• Similarly, cache lookup()s and xattr()s

Slide 22: ExtFUSE Performance
• "cd linux-4.17; make tinyconfig; make -j4"
• Intel i5-3350 quad core, SSD, Ubuntu 16.04.4 LTS
• Linux 4.11.0, libfuse commit #386b1b
• Overhead – Regular latency: 17.54%; ExtFUSE latency: 5.71%
• ExtFUSE memory: 50 MB (worst case)
• Cached in kernel: lookup, attr, xattr
[Bar chart: time (sec), Native vs. Regular vs. Optimized vs. ExtFUSE]

Slide 23: Requests received by FUSE daemon
• "cd linux-4.17; make tinyconfig; make -j4"
• Very few getattr()s and getxattr()s reach the daemon
[Bar chart: number of requests per operation, Regular vs. Optimized vs. ExtFUSE]

Slide 24: ExtFUSE Applications
• BPF code to cache/invalidate metadata in the kernel
  – Applies potentially to all FUSE file systems
  – e.g., Gluster readdir-ahead results could be cached
• BPF code to perform custom filtering or permission checks
  – e.g., Android SDCardFS uid checks in lookup(), open()
• BPF code to forward I/O requests to the lower FS in the kernel
  – e.g., install/remove the target file descriptor in a BPF map

Slide 25: ExtFUSE Status and References
• Work in progress at Georgia Tech
  – Applying to Gluster, Ceph, EncFS, Android SDCardFS, etc.
  – Project page: https://extfuse.github.io
• References
  – FUSE performance study by FSL, Stony Brook
  – IOVisor eBPF Project
  – BPF Compiler Collection (BCC) Toolchain

Slide 26: Thank You!
[email protected]
www.linkedin.com/in/ashishbijlani