
rkt-io: A Direct I/O Stack for Shielded Execution

Jörg Thalheim1,2, Harshavardhan Unnibhavi1,2, Christian Priebe3, Pramod Bhatotia1,2, Peter Pietzuch3
1Technical University of Munich, Germany   2The University of Edinburgh, UK   3Imperial College London, UK

Abstract

The shielding of applications using trusted execution environments (TEEs) can provide strong security guarantees in untrusted cloud environments. When executing I/O operations, today's shielded execution frameworks, however, exhibit performance and security limitations: they assign resources to the I/O path inefficiently, perform redundant data copies, and use untrusted host I/O stacks with security risks and performance overheads. This prevents TEEs from running modern I/O-intensive applications that require high-performance networking and storage.

We describe rkt-io (pronounced "rocket I/O"), a direct userspace network and storage I/O stack specifically designed for TEEs that combines high performance, POSIX compatibility and security. rkt-io achieves high I/O performance by employing direct userspace I/O libraries (DPDK and SPDK) inside the TEE for kernel-bypass I/O. For efficiency, rkt-io polls for I/O events directly, by interacting with the hardware instead of relying on interrupts, and it avoids data copies by mapping DMA regions in the untrusted host memory. To maintain full Linux ABI compatibility, the userspace I/O libraries are integrated with userspace versions of the Linux VFS and network stacks inside the TEE. Since it omits the host OS from the I/O path, rkt-io does not suffer from host interface/Iago attacks. Our evaluation with Intel SGX TEEs shows that rkt-io is 9× faster for networking and 7× faster for storage compared to host- (Scone) and LibOS-based (SGX-LKL) I/O approaches.

CCS Concepts: • Security and privacy → Trusted computing; • Software and its engineering → Operating systems.

1 Introduction

Cloud computing offers economies of scale for computational resources combined with ease of management, elasticity, and fault tolerance. At the same time, it increases the risk of security violations when applications run in untrusted third-party cloud environments. Attackers (or even malicious cloud administrators) can compromise the security of applications [72]. In fact, many studies show that software bugs, configuration errors, and security vulnerabilities pose serious threats to cloud systems, and software security is cited as a barrier to the adoption of cloud solutions [71].

Hardware-assisted trusted execution environments (TEEs), such as Intel SGX [35], ARM TrustZone [4], RISC-V Keystone [49, 69], and AMD-SEV [3], offer an appealing way to make cloud services more resilient against security attacks. TEEs provide a secure memory region that protects application code and data from other privileged layers in the system stack, including the OS kernel/hypervisor. Based on this promise, TEEs are now commercially offered by major cloud providers, including Azure [54], Google [27], and Alibaba [17].

TEEs, however, introduce new challenges in meeting the performance requirements of modern I/O-intensive applications that rely on high-performance networking hardware (e.g., >20 Gbps NICs) and storage (e.g., SSDs). Since TEEs are primarily designed to protect in-memory state, they offer only relatively expensive I/O support to interact with the untrusted host environment [20]. Early designs relied on expensive synchronous world switches between the trusted and untrusted domains for I/O calls, where a thread executing an I/O operation must exit the TEE before issuing a host I/O system call. This approach incurs prohibitive overheads due to the security sanitization of the CPU state, including registers, TLBs, etc.

To overcome this limitation, more recent designs used by shielded execution frameworks (e.g., Scone [5], Eleos [62], and SGX-LKL [66]) employ a switchless I/O model in which dedicated host I/O threads process I/O calls from TEE threads using shared memory queues. To avoid blocking TEE threads when waiting for I/O results, these frameworks employ user-level threading libraries inside the TEE to execute I/O calls asynchronously [76].

While such switchless asynchronous designs improve I/O performance over the strawman synchronous world-switching design, current frameworks still exhibit significant performance and security limitations: (1) they manage resources inefficiently by requiring dedicated I/O threads outside the TEE, which incur extra CPU cycles when busy-polling syscall queues; these additional I/O threads also require fine-grained performance tuning to determine their optimal number based on the application threads and I/O workload; (2) they perform additional data copies between the trusted and untrusted domains, and the indirection via shared memory queues significantly increases I/O latency; (3) the untrusted host interface on the I/O path has security and performance issues: the host interface is fundamentally insecure [15, 81], and it requires context switches, which are expensive for high-performance network and storage devices; and (4) they lack a universal and transparent mechanism to encrypt data on the I/O path; instead, they rely on application-level encryption, which is potentially not comprehensive, and incompatible with full VM encryption models.

To overcome these limitations, we argue for a fundamentally different design point where we re-design the I/O stack based on direct userspace I/O in the context of TEEs. To exemplify our design choice, we compare the direct I/O approach within TEEs with three alternative I/O approaches, measuring the performance of the sendto() syscall with 32-byte UDP packets over a 40 GbE link for (i) native (not secured), (ii) synchronous and (iii) asynchronous syscalls within TEEs (secured). As Figure 1a shows, native system calls (16.4 µs) and the direct I/O based approach (17.9 µs) take approximately the same time, while we see higher per-packet processing times for the synchronous (91.7 µs) and asynchronous (96.7 µs) system calls. By bypassing the host I/O support, TEE I/O stacks can avoid performance overheads (and security limitations).

[Figure 1. Micro-benchmarks showcasing the performance of syscalls, storage, and network stacks across different systems: (a) system call latency with sendto() (time in µs; native, sync, async, direct); (b) storage stack performance with fio (throughput in MiB/s); (c) network stack performance with iPerf (throughput in Gbps); (b) and (c) compare native, sgx-lkl, scone, and rkt-io.]

Our design for a TEE I/O stack therefore has the following goals: (a) performance: we aim to provide near-native performance by accessing the I/O devices (NICs or SSDs) directly within the TEEs; (b) security: we aim to ensure strong security guarantees, mitigating OS-based Iago [15] and host interface attacks [81]; and (c) compatibility: we aim to offer a complete POSIX/Linux ABI for applications without having to rewrite their I/O interface.

To achieve these design goals, we describe rkt-io (pronounced "rocket I/O"), an I/O stack for shielded execution using Intel SGX TEEs. It combines direct userspace I/O with the abstractions provided by a Linux-based LibOS (LKL [59]) inside the TEEs. This combination results in a high-performance I/O path, while preserving compatibility with off-the-shelf, well-tested Linux filesystems and network protocol implementations inside the TEE. Since the I/O stack runs in the protected domain of the TEE, rkt-io provides improved security, as it does not rely on information from the untrusted host OS.

The design of rkt-io embodies four principles to address the aforementioned limitations of current frameworks:

• rkt-io adopts a host-independent I/O interface to improve performance and security. This interface leverages a direct I/O mechanism in the context of TEEs, where it bypasses the host OS when accessing external hardware devices. At the same time, it leverages a Linux-based LibOS (LKL [59]) to provide full Linux compatibility.
• rkt-io favors a polling-based approach for I/O event handling, since TEEs do not provide an efficient way to receive interrupts on I/O events.
• rkt-io proposes a sensible I/O stack partitioning strategy to utilize resources efficiently and eliminate spurious data copies. It partitions the I/O stack by directly mapping the (encrypted) hardware DMA regions into untrusted memory outside the TEE, and runs the I/O stack within the TEE.
• rkt-io provides universal and transparent encryption in the I/O stack to ensure the confidentiality and integrity of data entering and leaving the TEE. It supports Layer 3 network packet encryption (based on Linux's in-kernel WireGuard VPN [23]) for networking, and full disk encryption (based on Linux's dm-crypt device mapper [22]) for storage.

Our evaluation with a range of micro-benchmarks and real-world applications shows that rkt-io provides better performance compared to Scone (a host-OS based approach) and SGX-LKL (a LibOS-based approach). For example, the read/write bandwidth of rkt-io's storage stack […]
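To make the kernel-bypass, polling-based design concrete, the sketch below shows the general shape of a DPDK poll-mode receive loop, the mechanism named above. It is a minimal illustration under stated assumptions, not rkt-io's actual code: port 0, the pool sizes, and process_packet() are hypothetical placeholders, and a real deployment also needs a TX path and NIC-specific configuration.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define BURST_SIZE 32

/* Placeholder: a real stack hands the frame to its network stack here
 * (in rkt-io's design, the LKL network stack running inside the TEE). */
static void process_packet(struct rte_mbuf *m) { (void)m; }

int main(int argc, char **argv)
{
    uint16_t port = 0; /* assumption: first DPDK-bound NIC port */

    if (rte_eal_init(argc, argv) < 0)
        return EXIT_FAILURE;

    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MBUF_POOL", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        (int)rte_socket_id());
    if (pool == NULL)
        return EXIT_FAILURE;

    /* RX-only sketch: one receive queue, default device config. */
    struct rte_eth_conf port_conf;
    memset(&port_conf, 0, sizeof(port_conf));
    if (rte_eth_dev_configure(port, 1, 0, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
        rte_eth_dev_start(port) < 0)
        return EXIT_FAILURE;

    /* The polling loop: instead of waiting for an interrupt, the CPU
     * repeatedly asks the NIC for newly received frames. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t n = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < n; i++) {
            process_packet(bufs[i]);
            rte_pktmbuf_free(bufs[i]);
        }
    }
    return 0;
}
```

Note what is absent compared with the switchless designs described above: no host I/O threads, no shared-memory syscall queues, and no world switches appear on the receive path.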
Recommended publications
  • Linux: Kernel Release Number, Part II
    Linux: Kernel Release Number, Part II. Posted by jeremy on Friday, March 4, 2005 - 07:05.

    In the continued discussion on release numbering for the Linux kernel [story], Linux creator Linus Torvalds decided against trying to add meaning to the odd/even least significant number. Instead, the new plan is to go from the current 2.6.x numbering to a finer-grained 2.6.x.y. Linus will continue to maintain only the 2.6.x releases, and the -rc releases in between. Others will add trivial patches to create the 2.6.x.y releases.

    Linus cautions that the task of maintaining a 2.6.x.y tree is not going to be enjoyable: "I'll tell you what the problem is: I don't think you'll find anybody to do the parallell 'only trivial patches' tree. They'll go crazy in a couple of weeks. Why? Because it's a _damn_ hard problem. Where do you draw the line? What's an acceptable patch? And if you get it wrong, people will complain _very_ loudly, since by now you've 'promised' them a kernel that is better than the mainline. In other words: there's almost zero glory, there are no interesting problems, and there will absolutely be people who claim that you're a dick-head and worse, probably on a weekly basis."

    He went on to add, "that said, I think in theory it's a great idea. It might even be technically feasible if there was some hard technical criteria for each patch that gets accepted, so that you don't have the burn-out problem." His suggested criteria included limiting the patch to being 100 lines or less, and requiring that it fix an oops, a hang, or an exploitable security hole.
  • Open Source Software Notice
    OPEN SOURCE SOFTWARE NOTICE. DCS Touch Display Software V2.00.XXX. Schüco International KG, Karolinenstraße 1-15, 33609 Bielefeld.

    This document contains information about open source software for this product. The rights granted under open source software licenses are granted by the respective right holders. In the event of conflicts between SCHÜCO's license conditions and the applicable open source licenses, the open source license conditions take precedence over SCHÜCO's license conditions with regard to the respective open source software.

    You are allowed to modify SCHÜCO's proprietary programs and to conduct reverse engineering for the purpose of debugging such modifications, to the extent such programs are linked to libraries licensed under the GNU Lesser General Public License. You are not allowed to distribute information resulting from such reverse engineering or to distribute the modified proprietary programs.

    The right holders of the open source software require that the following disclaimer be referenced, which shall apply with regard to those right holders:

    Warranty Disclaimer: THE OPEN SOURCE SOFTWARE IN THIS PRODUCT IS DISTRIBUTED ON AN "AS IS" BASIS AND IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY OF ANY KIND, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS.

    Copyright Notices and License Texts (please see the source code for all details). Software: iptables. Copyright notice: Copyright (C) 1989, 1991 Free Software Foundation, Inc. Copyright Google, Inc.
  • Hardware-Driven Evolution in Storage Software by Zev Weiss
    Hardware-Driven Evolution in Storage Software, by Zev Weiss. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2018. Date of final oral examination: June 8, 2018.

    The dissertation is approved by the following members of the Final Oral Committee: Andrea C. Arpaci-Dusseau, Professor, Computer Sciences; Remzi H. Arpaci-Dusseau, Professor, Computer Sciences; Michael M. Swift, Professor, Computer Sciences; Karthikeyan Sankaralingam, Professor, Computer Sciences; Johannes Wallmann, Associate Professor, Mead Witter School of Music.

    © Copyright by Zev Weiss 2018. All Rights Reserved.

    To my parents, for their endless support, and my cousin Charlie, one of the kindest people I've ever known.

    Acknowledgments: I have taken what might be politely called a "scenic route" of sorts through grad school. While Ph.D. students more focused on a rapid graduation turnaround time might find this regrettable, I am glad to have done so, in part because it has afforded me the opportunities to meet and work with so many excellent people along the way. I owe debts of gratitude to a large cast of characters: To my advisors, Andrea and Remzi Arpaci-Dusseau. It is one of the most common pieces of wisdom imparted on incoming grad students that one's relationship with one's advisor (or advisors) is perhaps the single most important factor in whether these years of your life will be pleasant or unpleasant, and I feel exceptionally fortunate to have ended up with the advisors that I've had.
  • Achieving Keyless CDNs with Conclaves
    Achieving Keyless CDNs with Conclaves. Stephen Herwig (University of Maryland), Christina Garman (Purdue University), Dave Levin (University of Maryland).

    Abstract: Content Delivery Networks (CDNs) serve a large and increasing portion of today's web content. Beyond caching, CDNs provide their customers with a variety of services, including protection against DDoS and targeted attacks. As the web shifts from HTTP to HTTPS, CDNs continue to provide such services by also assuming control of their customers' private keys, thereby breaking a fundamental security principle: private keys must only be known by their owner. We present the design and implementation of Phoenix, the first truly "keyless CDN". Phoenix uses secure enclaves (in particular Intel SGX) to host web content, store sensitive key material, apply web application firewalls, and more on otherwise untrusted machines.

    […] tamper with all of their customers, including virtually all of the world's major banks, online shops, and many government sites. The messy relationship between HTTPS and CDNs is made all the more challenging by the fact that CDNs today do far more than merely host the bulk of the web's content. They also use web application firewalls (WAFs) to analyze clients' requests for evidence of targeted attacks like SQL injection or cross-site scripting, and filter them before uploading to their customers [8]. CDN customers benefit from this service because it scrubs attack traffic far from their own networks. And yet, running a WAF on a CDN requires the CDN to have access to the website's unencrypted traffic. There have been recent advances to address aspects of this […]
  • To FUSE Or Not to FUSE? Analysis and Performance Characterization of the FUSE User-Space File System Framework
    To FUSE or not to FUSE? Analysis and Performance Characterization of the FUSE User-Space File System Framework. A thesis presented by Bharath Kumar Reddy Vangoor to the Graduate School in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, Stony Brook University, December 2016. Technical Report FSL-16-02. Copyright by Bharath Kumar Reddy Vangoor, 2016.

    Thesis committee: Dr. Erez Zadok (Thesis Advisor), Professor, Computer Science; Dr. Mike Ferdman (Thesis Committee Chair), Assistant Professor, Computer Science; Dr. Vasily Tarasov, IBM Research – Almaden. This thesis is accepted by the Graduate School: Charles Taber, Dean of the Graduate School.

    Abstract of the Thesis: Traditionally, file systems were implemented as part of operating system kernels, which provide a limited set of tools and facilities to a programmer. As the complexity of file systems grew, many new file systems began being developed in user space. Low performance is considered the main disadvantage of user-space file systems, but the extent of this problem has never been explored systematically. As a result, the topic of user-space file systems remains rather controversial: while some consider user-space file systems a "toy" not to be used in production, others develop full-fledged production file systems in user space.
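To make the framework concrete, below is the classic minimal read-only file system in the libfuse 2.x C API, essentially the well-known upstream "hello" example; the file name and contents are arbitrary.

```c
/* Minimal read-only FUSE file system (libfuse 2.x): exposes a single
 * file /hello containing a fixed string. Each callback runs in user
 * space; the kernel FUSE module forwards VFS operations to it. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>

static const char *hello_str = "Hello, FUSE!\n";
static const char *hello_path = "/hello";

static int hello_getattr(const char *path, struct stat *stbuf)
{
    memset(stbuf, 0, sizeof(struct stat));
    if (strcmp(path, "/") == 0) {
        stbuf->st_mode = S_IFDIR | 0755;
        stbuf->st_nlink = 2;
    } else if (strcmp(path, hello_path) == 0) {
        stbuf->st_mode = S_IFREG | 0444;
        stbuf->st_nlink = 1;
        stbuf->st_size = (off_t)strlen(hello_str);
    } else {
        return -ENOENT;
    }
    return 0;
}

static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                         off_t offset, struct fuse_file_info *fi)
{
    (void)offset; (void)fi;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);
    filler(buf, hello_path + 1, NULL, 0); /* skip leading '/' */
    return 0;
}

static int hello_open(const char *path, struct fuse_file_info *fi)
{
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((fi->flags & O_ACCMODE) != O_RDONLY)
        return -EACCES;
    return 0;
}

static int hello_read(const char *path, char *buf, size_t size, off_t offset,
                      struct fuse_file_info *fi)
{
    (void)fi;
    size_t len = strlen(hello_str);
    if (strcmp(path, hello_path) != 0)
        return -ENOENT;
    if ((size_t)offset >= len)
        return 0;
    if (offset + size > len)
        size = len - (size_t)offset;
    memcpy(buf, hello_str + offset, size);
    return (int)size;
}

static struct fuse_operations hello_oper = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .open    = hello_open,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_oper, NULL);
}
```

Built with `gcc hellofs.c $(pkg-config fuse --cflags --libs) -o hellofs` and mounted with `./hellofs <mountpoint>`, every getattr/readdir/open/read on the mountpoint is routed by the kernel into these userspace callbacks; that kernel-to-userspace round trip is the overhead the thesis characterizes.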
  • Reducing the Test Effort on AGL Kernels Together by Using LTSI and LTSI Test Framework
    Proposal: Reducing the Test Effort on AGL Kernels Together by Using LTSI and LTSI Test Framework. Feb 26th, 2015. Fan Xin, Fujitsu Computer Technologies Limited.

    whoami: In-house embedded Linux distributor of Fujitsu. Our distribution includes the LTSI kernel and is built with the Yocto Project. It is used for IVI (In-Vehicle Infotainment), server system controllers, storage systems, network equipment, printers, etc.

    Agenda: Introduction of the LTSI kernel; test cases for our distro; introduction of the LTSI Test Project; proposal for AGL.

    Introduction of LTSI. Release frequency of the mainline kernel:

        Version   Date         Days of development
        3.11      2013-09-02   64
        3.12      2013-11-03   62
        3.13      2014-01-19   77
        3.14      2014-03-30   70
        3.15      2014-06-08   70
        3.16      2014-08-03   56
        3.17      2014-10-05   63
        3.18      2014-12-07   63
        3.19      2015-02-09   65

    Over time, kernel development cycles have slowly been getting shorter. The average cycle is just under 66 days. (http://www.linuxfoundation.org/publications/linux-foundation/who-writes-linux-2015)

    Stable kernel: stable kernel updates ensure that patches will be made to fix the problems that are found.

        Kernel Release   Updates   Fixes
        3.10             65        4,008
        3.11             10        688
        3.12             36        3,969
        3.13             11        908
        3.14             29        2,563
        3.15             10        701
        3.16             7         876
        3.17             8         890
        3.18             3         252

    Longterm Stable (LTS): there are usually several "longterm maintenance" kernel releases provided for the purposes of backporting bugfixes for older kernel trees.
  • Open Source Used in Cisco RV130/RV130W Firmware Version 1.0.2.x
    Open Source Used In Cisco RV130/RV130W 1.0.2.x. Cisco Systems, Inc. (www.cisco.com). Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices. Text Part Number: 78EE117C99-88290969.

    This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number: 78EE117C99-88290969.

    Contents:
        1.1 bridge-utils 1.0.6
            1.1.1 Available under license
        1.2 BusyBox 1.7.2
            1.2.1 Available under license
        1.3 conntrack-tools 1.0.0
            1.3.1 Available under license
        1.4 cron 3.0
            1.4.1 Available under license
        1.5 curl 7.19.4
            1.5.1 Available under license
        1.6 dhcp 4.1.0
            1.6.1 […]
  • Using TCP/IP Traffic Shaping to Achieve iSCSI Service Predictability
    Using TCP/IP traffic shaping to achieve iSCSI service predictability. Master thesis, Jarle Bjørgeengen, University of Oslo, Department of Informatics / Oslo University College / USIT, May 26, 2010.

    Abstract: This thesis reproduces the properties of load interference common in many storage devices using resource sharing for flexibility and maximum hardware utilization. The nature of resource sharing and load is studied and compared to assumptions and models used in previous work. The results are used to design a method for throttling iSCSI initiators, attached to an iSCSI target server, using a packet delay module in Linux Traffic Control. The packet delay throttle enables close-to-linear rate reduction for both read and write operations. Iptables and Ipset are used to add the dynamic packet matching needed for rapidly changing throttling values. All throttling is achieved without triggering TCP retransmit timeout and subsequent slow start caused by packet loss. A control mechanism for dynamically adapting throttling values to rapidly changing workloads is implemented using a modified proportional integral derivative (PID) controller. Using experiments, control engineering filtering techniques and results from previous research, a suitable per-resource saturation indicator was found. The indicator is an exponential moving average of the wait time of active resource consumers. It is used as the input value to the PID controller managing the packet rates of resource consumers, creating a closed control loop managed by the PID controller. Finally a prototype of an autonomic resource prioritization framework is designed.
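The closed control loop the abstract describes combines two textbook pieces: an exponential moving average (EMA) as the saturation indicator and a PID controller driving the throttle. The sketch below shows those pieces in their generic form; the gains, smoothing factor, and sample values are assumed for illustration and are not the thesis's tuned, modified controller.

```c
/* Generic sketch of an EMA-fed PID loop: an EMA of consumer wait time
 * is the saturation indicator; the PID output adjusts a packet-delay
 * throttle. All constants below are assumed, not from the thesis. */
#include <stdio.h>

struct pid_ctrl {
    double kp, ki, kd;   /* controller gains (assumed) */
    double integral;
    double prev_error;
};

/* Exponential moving average, 0 < alpha <= 1. */
static double ema(double prev, double sample, double alpha)
{
    return alpha * sample + (1.0 - alpha) * prev;
}

/* One PID step: proportional + integral + derivative of the error. */
static double pid_step(struct pid_ctrl *c, double setpoint,
                       double measured, double dt)
{
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void)
{
    struct pid_ctrl c = { .kp = 0.5, .ki = 0.1, .kd = 0.05 };
    double wait_ema = 0.0;
    double setpoint = 10.0;                      /* target wait, ms */
    double samples[] = { 25.0, 22.0, 18.0, 14.0, 11.0 };

    for (int i = 0; i < 5; i++) {
        wait_ema = ema(wait_ema, samples[i], 0.3);
        double adjust = pid_step(&c, setpoint, wait_ema, 1.0);
        printf("ema=%.2f ms, throttle adjustment=%.2f\n",
               wait_ema, adjust);
    }
    return 0;
}
```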
  • Reliable Writeback for Client-Side Flash Caches
    Reliable Writeback for Client-Side Flash Caches, by Dai Qin. A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto. © Copyright 2014 by Dai Qin.

    Abstract: Modern data centers are increasingly using shared storage solutions for ease of management. Data is cached on the client side on inexpensive and high-capacity flash devices, helping improve performance and reduce contention on the storage side. Currently, write-through caching is used because it ensures consistency and durability under client failures, but it offers poor performance for write-heavy workloads. In this work, we propose two write-back based caching policies, called write-back flush and write-back persist, that provide strong reliability guarantees under two different client failure models. These policies rely on storage applications such as file systems and databases issuing write barriers to persist their data, because these barriers are the only reliable method for storing data durably on storage media. Our evaluation shows that these policies achieve performance close to write-back caching, while providing stronger guarantees than vanilla write-through caching.

    Acknowledgements: I would like to thank my committee members, Professor Amr Helmy, Professor Ding Yuan and Professor Michael Stumm, for their valuable comments and positive feedback. I would like to thank my supervisors, Professor Ashvin Goel and Professor Angela Demke Brown, for their support, patience and encouragement.
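The write barriers these policies key on are, in POSIX terms, typically issued as fsync()/fdatasync() calls: only after the barrier returns may an application assume its data is durable. A minimal sketch (the file name is arbitrary):

```c
/* Minimal illustration of a write barrier on POSIX: data is only
 * guaranteed durable on media after fsync()/fdatasync() returns.
 * Write-back caching policies like those above anchor their
 * reliability guarantees to exactly these barrier points. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *msg = "committed record\n";
    int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, msg, strlen(msg)) < 0) { perror("write"); return 1; }

    /* The barrier: block until the device reports the data persistent.
     * Before this returns, a crash may silently lose the write. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}
```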
  • DAMN: Overhead-Free IOMMU Protection for Networking
    DAMN: Overhead-Free IOMMU Protection for Networking. Alex Markuze, Igor Smolyar, Adam Morrison, Dan Tsafrir (Technion – Israel Institute of Technology; Tel Aviv University; VMware Research Group).

    Abstract: DMA operations can access memory buffers only if they are "mapped" in the IOMMU, so operating systems protect themselves against malicious/errant network DMAs by mapping and unmapping each packet immediately before/after it is DMAed. This approach was recently found to be riskier and less performant than keeping packets non-DMAable and instead copying their content to/from permanently-mapped buffers. Still, the extra copy hampers performance of multi-gigabit networking. We observe that achieving protection at the DMA (un)map boundary is needlessly constraining, as devices must be prevented from changing the data only after the kernel reads it. So there is no real need to switch ownership of buffers between kernel and device at the DMA (un)mapping layer, as opposed to the approach taken by all existing IOMMU protection schemes.

    1 Introduction: Bandwidth demands of servers in modern data centers scale with the number of cores per server, requiring support for ever-growing networking speeds both at the endpoints and at the data center fabric. Google's Jupiter fabric [36], for example, assumed 40 Gb/s endpoints, and 100 Gb/s network cards (NICs) have since been made available. Data center servers must also offer strong security guarantees to their clients, one aspect of which includes protection from compromised or malicious devices, a threat that is explicitly considered in Google's security design [18, Secure Data Storage]. Presently, however, an operating system (OS) cannot defend against malicious NICs without paying a significant performance penalty. This limitation results from the use of I/O memory management units (IOMMUs) [2, 4, 19, 23] to restrict device direct memory access (DMA), as unrestricted […]
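The two strategies the abstract contrasts can be sketched with the Linux kernel's DMA API. The following is schematic kernel-style code, not taken from the paper: the device pointer, the ring-posting steps, and buffer management are elided, and it is meant only to show where the per-packet IOMMU work and the extra copy occur.

```c
/* Schematic contrast of the two IOMMU protection strategies discussed
 * above, using the Linux DMA API (linux/dma-mapping.h). Kernel-style
 * sketch, not a complete driver: dev, ring posting, and buffer sizes
 * are assumed. */
#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/string.h>

/* Strategy 1: per-packet (un)mapping. The IOMMU enforces the access
 * window, but every packet pays a map plus an unmap (with its IOTLB
 * invalidation) on the hot path. */
static int tx_map_per_packet(struct device *dev, void *pkt, size_t len)
{
    dma_addr_t dma = dma_map_single(dev, pkt, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma))
        return -ENOMEM;
    /* ... post dma to the NIC ring and wait for TX completion ... */
    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
    return 0;
}

/* Strategy 2: copy into a permanently mapped bounce buffer. No
 * per-packet IOMMU work, but an extra copy on the hot path; this is
 * the copy overhead DAMN sets out to eliminate. `bounce` is assumed
 * to have been mapped once at init with dma_map_single() and is
 * never unmapped. */
static int tx_copy_to_mapped(struct device *dev, void *bounce,
                             dma_addr_t bounce_dma, void *pkt, size_t len)
{
    memcpy(bounce, pkt, len);          /* the extra copy */
    dma_sync_single_for_device(dev, bounce_dma, len, DMA_TO_DEVICE);
    /* ... post bounce_dma to the NIC ring ... */
    return 0;
}
```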
  • Open Source Used in Cisco RV130/RV130W 1.0.3.51
    Open Source Used In Cisco RV130/RV130W 1.0.3.51. Cisco Systems, Inc. (www.cisco.com). Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices. Text Part Number: 78EE117C99-202560422.

    This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number: 78EE117C99-202560422.

    Contents:
        1.1 bridge-utils 1.0.6
            1.1.1 Available under license
        1.2 BusyBox 1.7.2
            1.2.1 Available under license
        1.3 conntrack-tools 1.0.0
            1.3.1 Available under license
        1.4 cron 3.0
            1.4.1 Available under license
        1.5 curl 7.19.4
            1.5.1 Available under license
        1.6 dhcp 4.1.0
            1.6.1 Available […]
  • Understanding the Impact of Cache Locations on Storage Performance and Energy Consumption of Virtualization Systems
    Understanding the Impact of Cache Locations on Storage Performance and Energy Consumption of Virtualization Systems. Tao Lu, Ping Huang, Xubin He (Virginia Commonwealth University, USA), Ming Zhang (EMC Corporation, USA).

    Abstract: As per-server CPU cores and DRAM capacity continuously increase, the application density of virtualization platforms hikes. High application density imposes tremendous pressure on storage systems. Layers of caches are deployed to improve storage performance. Owing to its manageability and transparency advantages, hypervisor-side caching is widely employed. However, hypervisor-side caches locate at the lower layer of VM disk filesystems. Thus, the critical path of cache access involves the virtual I/O sub-path, which is expensive in operation latency and CPU cycles. The virtual I/O sub-path caps the throughput (IOPS) of the hypervisor-side cache and incurs additional energy consumption.

    […]ing in increased resource utilization [2]. Despite the management flexibilities, hypervisor-side cache access involves the costly I/O virtualization layers, which cause intensive activities of the userspace process such as qemu, and incur considerable interrupt deliveries [6]. A study of virtio [7] with network transactions shows that a busy virtualized web server may consume 40% more energy, due to 5× more CPU cycles to deliver a packet, than its non-virtualized counterpart [8]. Similarly, our tests on storage transactions show that I/O virtualization caps hypervisor-side cache performance and increases per-I/O energy consumption. In contrast, VM-side caches may obviate the I/O virtualization overheads. More specifically, for 4KB read requests at the IOPS […]