Secure Cloud Storage with Client-Side Encryption Using a Trusted Execution Environment

Marciano da Rocha (1,a), Dalton Cezané Gomes Valadares (2,4,b), Angelo Perkusich (3,c), Kyller Costa Gorgonio (4,d), Rodrigo Tomaz Pagno (1,e) and Newton Carlos Will (1,f)

1 Department of Computer Science, Federal University of Technology, Paraná, Dois Vizinhos, Brazil
2 Department of Mechanical Engineering, Federal Institute of Pernambuco, Caruaru, Brazil
3 Department of Electrical Engineering, Federal University of Campina Grande, Campina Grande, Brazil
4 Department of Computer Science, Federal University of Campina Grande, Campina Grande, Brazil

a https://orcid.org/0000-0002-8213-6803, b https://orcid.org/0000-0003-1709-0404, c https://orcid.org/0000-0002-7377-1258, d https://orcid.org/0000-0001-9796-1382, e https://orcid.org/0000-0001-9267-4982, f https://orcid.org/0000-0003-2976-4533

In Proceedings of the 10th International Conference on Cloud Computing and Services Science (CLOSER 2020), pages 31-43. DOI: 10.5220/0009130600310043. ISBN: 978-989-758-424-4. Copyright © 2020 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved.

Keywords: Intel SGX, Data Sealing, File Encryption, Confidentiality, Integrity, Secure Storage, Cloud Storage.

Abstract: With the evolution of computer systems, the amount of sensitive data to be stored, as well as the number of threats against these data, keeps growing, making data confidentiality increasingly important to computer users. Currently, with devices always connected to the Internet, the use of cloud data storage services has become practical and common, allowing quick access to such data wherever the user is. Such practicality brings with it a concern: the confidentiality of the data that is handed over to third parties for storage. In the home environment, disk encryption tools have gained special attention from users, being used on personal computers and also having native options in some smartphone operating systems. The present work uses data sealing, a feature provided by the Intel Software Guard Extensions (Intel SGX) technology, for file encryption. A virtual file system is created in which applications can store their data, keeping the security guarantees provided by the Intel SGX technology, before the data are sent to a storage provider. This way, even if the storage provider is compromised, the data remain safe. To validate the proposal, the Cryptomator software, a free client-side encryption tool for cloud files, was integrated with an Intel SGX application (enclave) for data sealing. The results demonstrate that the solution is feasible, in terms of performance and security, and can be expanded and refined for practical use and integration with cloud synchronization services.

1 INTRODUCTION

Cyber attacks and different cybercrimes are an increasing threat to today's digital society (Huang et al., 2018; Khraisat et al., 2019; Singh et al., 2019). In this context, data governance and security are of utmost importance (Onwujekwe et al., 2019). Reports from specialized companies and government security entities indicate a growing number of threats to digital data, whether belonging to corporations or to users. Companies faced an average of 25 new threats per day in 2006, a number that had risen to 500,000 by 2016 (Weafer, 2016), and 61% of CEOs are concerned with the state of the cyber security of their company (PwC, 2016).

Nowadays, users increasingly rely on cloud storage services to keep their files, which they can then access from any device connected to the Internet. Such storage services hold a large set of data from various users, including sensitive and confidential information, owing to the users' reliance on such providers. However, these storage services do not always guarantee or respect the privacy of their users (Branscombe, 2015; Cox, 2016; Clover, 2017).

One way to ensure an additional level of security for users' sensitive files is to encrypt such data before sending them to the cloud storage server. One option that performs such an operation is a disk encryption system, which runs in the background of the operating system and encrypts all information that is stored.

This type of system has some vulnerabilities, which can increase the risk of attacks and improper access to data. One of these vulnerabilities is the user's password choice, which is often weak and can be discovered using dictionary attacks, default passwords, rainbow tables or even brute force. Another weakness is that most existing disk encryption systems keep the encryption key in main memory while running, allowing other malicious applications to attack and discover the key.
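To make these weaknesses concrete, the sketch below, which is ours and not from the paper, shows the pattern a typical software file/disk encryption tool follows, written here in C against OpenSSL's EVP API: the key is derived from a user-chosen password and held in an ordinary buffer in the process's main memory for as long as the tool runs, which is exactly what dictionary attacks and memory-scraping malware target.

    /*
     * Illustrative sketch only (not the paper's code): conventional
     * password-based file encryption with OpenSSL, in the style of the
     * tools described above. Note the two weaknesses from the text:
     * key strength depends on the user's password, and the derived key
     * sits in ordinary process memory (the `key` buffer) while in use.
     * The caller must provide `out` with at least `plain_len` bytes.
     */
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <string.h>

    int encrypt_buffer(const char *password,
                       const unsigned char *plain, int plain_len,
                       unsigned char *out, int *out_len,
                       unsigned char salt[16], unsigned char iv[12],
                       unsigned char tag[16])
    {
        unsigned char key[32];             /* AES-256 key lives in main memory */
        int len = 0, total = 0, ok = 0;

        if (RAND_bytes(salt, 16) != 1 || RAND_bytes(iv, 12) != 1)
            return 0;

        /* Key derived from a (possibly weak) user-chosen password. */
        if (PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                              salt, 16, 100000, EVP_sha256(),
                              (int)sizeof(key), key) != 1)
            return 0;

        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        if (ctx &&
            EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv) == 1 &&
            EVP_EncryptUpdate(ctx, out, &len, plain, plain_len) == 1) {
            total = len;
            if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1) {
                total += len;
                /* Authentication tag is stored alongside the ciphertext. */
                ok = EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag) == 1;
            }
        }
        EVP_CIPHER_CTX_free(ctx);
        OPENSSL_cleanse(key, sizeof(key)); /* best effort; copies may remain */
        *out_len = total;
        return ok;
    }

Even with the cleanse at the end, the key (and any copies kept by the cipher context) is exposed in RAM during use; removing that exposure is precisely what motivates moving the cryptographic operations into an SGX enclave.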
Nowadays, the use of trusted hardware technologies, such as Intel Software Guard Extensions (Intel SGX) (INTEL, 2014) and ARM TrustZone (Pinto and Santos, 2019), has grown. An Intel SGX application creates an isolated, protected region of memory, called an enclave, in which data are securely processed. Among the various features provided by the SGX technology is the sealing of enclave data to secondary memory. This process uses as the basis for the encryption a unique key, generated by combining a developer key and a processor key; this key is not stored in memory, but is regenerated for each request, guaranteeing that the data can be decrypted only by the enclave that sealed them, on the platform on which they were sealed (Anati et al., 2013). Another feature provided by the Intel SGX technology is the encryption of data in a region of primary memory with a random key that is generated at each power cycle. This key is known only by the processor and never leaves it (INTEL, 2016).
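As a concrete illustration of the sealing feature just described, the following minimal sketch assumes the Intel SGX SDK's sgx_tseal.h interface; it is not the paper's implementation, and ECALL names such as ecall_seal_chunk are hypothetical. It shows how enclave code typically seals and unseals a buffer; the sealing key itself is derived inside the processor on every call and never appears in application memory.

    /*
     * Minimal enclave-side sketch (illustrative, not the paper's code) of
     * data sealing with the Intel SGX SDK. These functions would be exposed
     * as ECALLs; their names and signatures are assumptions.
     */
    #include <stdint.h>
    #include "sgx_tseal.h"

    /* Seal a plaintext buffer. With the default policy (MRSIGNER), any enclave
     * signed with the same key, on this same platform, can later unseal it. */
    sgx_status_t ecall_seal_chunk(const uint8_t *plain, uint32_t plain_len,
                                  uint8_t *sealed, uint32_t sealed_cap,
                                  uint32_t *sealed_len)
    {
        uint32_t need = sgx_calc_sealed_data_size(0, plain_len);
        if (need == UINT32_MAX || need > sealed_cap)
            return SGX_ERROR_INVALID_PARAMETER;

        *sealed_len = need;
        return sgx_seal_data(0, NULL, plain_len, plain,
                             need, (sgx_sealed_data_t *)sealed);
    }

    /* Unseal a previously sealed blob back into plaintext. */
    sgx_status_t ecall_unseal_chunk(const uint8_t *sealed,
                                    uint8_t *plain, uint32_t *plain_len)
    {
        const sgx_sealed_data_t *blob = (const sgx_sealed_data_t *)sealed;
        uint32_t len = sgx_get_encrypt_txt_len(blob);
        if (len == UINT32_MAX || len > *plain_len)
            return SGX_ERROR_INVALID_PARAMETER;

        *plain_len = len;
        return sgx_unseal_data(blob, NULL, NULL, plain, plain_len);
    }

Compared with the conventional sketch earlier, no long-lived key ever exists in the application's address space; the sealed blob is all that leaves the enclave.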
According to the previous discussion, we established the following two research questions to guide our research:

1. How to improve the security of traditional disk encryption tools?

2. How to decrease the user's need to trust the cloud service provider?

In this paper, we propose the use of Intel SGX features to perform data sealing and processing in a more secure way, since these operations are performed only inside an enclave, which is supposed to be protected even against adversaries with high privileges inside the operating system. As a proof of concept, we implemented an application using Cryptomator, an open-source client-side encryption tool used to protect cloud files. Cryptomator was modified and integrated with an Intel SGX application in order to use its data sealing feature, which is performed only inside an enclave. With this, the data encryption/decryption processes are performed in a more secure way, and this security extends to the keys, since they are also handled inside the enclave.

The main contributions of this work are listed below:

• The proposal to improve the security of the disk/file encryption process using Intel SGX;

• The implementation of such a proposal, consisting of an Intel SGX application integrated with the Cryptomator tool;

• The performance evaluation of our implementation, considering six different hardware combinations.

The remainder of this paper is organized as follows: Section 2 provides background information about cloud storage and disk encryption techniques, as well as the Intel Software Guard Extensions technology; Section 3 presents previous research closely related to our proposal, in the field of cloud storage security and the use of Intel SGX in file encryption; Section 4 describes the solution proposed in this paper; in order to validate the proposed approach, three prototypes are presented in Section 5 as a proof of concept; Section 6 presents the performance evaluation of the prototype on six different hardware combinations; the threat model, as well as the security evaluation of the Intel SGX technology and of the proposed solution, are presented in Section 7; the limitations of our solution are described in Section 8; and, finally, Section 9 concludes the paper and presents future work.

2 BACKGROUND

This section presents a brief description of important concepts used in this work, such as cloud storage, disk encryption and the Intel Software Guard Extensions.

2.1 Cloud Storage

Nowadays, more users are using cloud storage services, either explicitly, to keep their personal files, or implicitly, through applications that make use of such services in the background to maintain backups or a history of user data. The benefits inherent in cloud data storage are virtually unlimited storage, version history for each file, and access to data at any time from any device connected to the Internet.

But there is always a concern regarding the security of data stored in the cloud, especially as the number of cyber attacks intensifies. One of the most promising defenses for the cloud is encryption, which offers robust protection for data both in storage and in transit. Encryption is available in two main security configurations: client-side and server-side.

With server-side encryption, the cloud storage provider manages the encryption keys along with the data, and many cloud storage providers use this method. This limits the complexity of the environment while maintaining the isolation of your data, but there are several risks in keeping the encryption key and the data encrypted with it in the same place. In addition, the cloud provider itself can access keys and [...]

[...] failures in the encryption process, allowing the stored data to be extracted from the drive without knowing the key used to perform the encryption. Software-based FDE (full disk encryption) engines are applications run at the kernel level by the CPU, in which the data and the keys in use are kept in main memory.
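Relating the client-side configuration above to the enclave sketch shown earlier, the untrusted (host) side of such an integration could look roughly like the following. This is our own sketch, not the paper's implementation; the enclave file name, the generated ECALL proxy in enclave_u.h, and the vault directory are all assumptions. The point is simply that only the sealed blob is ever written to the folder that a cloud client synchronizes.

    /*
     * Untrusted (host-side) sketch, illustrative only: seal a buffer through
     * the enclave and write the sealed blob into a locally synchronized vault
     * folder, so that only ciphertext ever reaches the storage provider.
     * The enclave file name, the ECALL proxy (generated from a hypothetical
     * EDL into enclave_u.h) and the "vault" path are assumptions.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include "sgx_urts.h"
    #include "enclave_u.h"   /* hypothetical: produced by sgx_edger8r */

    int store_sealed(const uint8_t *plain, uint32_t plain_len, const char *name)
    {
        sgx_enclave_id_t eid = 0;
        sgx_launch_token_t token = {0};
        int updated = 0;
        int rc = -1;

        if (sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                               &token, &updated, &eid, NULL) != SGX_SUCCESS)
            return -1;

        uint32_t cap = plain_len + 1024;   /* generous bound: header + payload */
        uint32_t sealed_len = 0;
        uint8_t *sealed = malloc(cap);
        sgx_status_t ecall_ret = SGX_ERROR_UNEXPECTED;

        /* Proxy for the hypothetical ECALL sketched earlier. */
        if (sealed &&
            ecall_seal_chunk(eid, &ecall_ret, plain, plain_len,
                             sealed, cap, &sealed_len) == SGX_SUCCESS &&
            ecall_ret == SGX_SUCCESS) {
            char path[512];
            snprintf(path, sizeof(path), "vault/%s.sealed", name);
            FILE *f = fopen(path, "wb");   /* "vault" = cloud-synced folder */
            if (f) {
                rc = (fwrite(sealed, 1, sealed_len, f) == sealed_len) ? 0 : -1;
                fclose(f);
            }
        }

        free(sealed);
        sgx_destroy_enclave(eid);
        return rc;
    }

In the paper's actual prototype this role is played by Cryptomator's virtual file system, with the enclave performing the sealing; the sketch only shows the shape of the data flow.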