Linux Multiqueue Networking


Linux Multiqueue Networking
David S. Miller, Red Hat Inc.
Netfilter Workshop, ESIEA, Paris, 2008

Outline: Basic Premise; Receive Side Multiqueue; Transmit Side Multiqueue (Existing Design, New Design, Implementation Notes)

NETWORKING PACKETS AS CPU WORK
- Each packet generates some CPU work, both at the protocol and the application level
- Faster interfaces can generate more work
- Sophisticated networking can generate more work: firewalling lookups, route lookups, etc.

CPU EVOLUTION
- Multiprocessor systems used to be uncommon; this is changing... significantly
- Common systems will soon have 64 cpu threads, or more
- Lots of CPU processing resources; the trouble is finding ways to use them

END TO END PRINCIPLE
- Basic premise: communication protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled
- The network is dumb; the end nodes are smart
- Computational complexity is pushed as far to the edges as possible

END-TO-END APPLIED TO "SYSTEM"
- The end-to-end principle extends all the way into your computer
- Each cpu in your system is an end node
- The "network" pushes work to your system
- Internally, work should be pushed down further to the execution resources
- With multi-threaded processors like Niagara, this is more important to get right than ever before

NETWORK DEVICE
- Devices must provide facilities to balance the load of incoming network work
- But they must do so without creating new problems (such as reordering); packet reordering looks like packet loss to TCP
- Independent "work" should call upon independent cpus to process it
- PCI MSI-X interrupts
- Multiqueue network devices, with classification facilities

SOFTWARE SOLUTIONS
- Not everyone will have multiqueue-capable network devices or systems
- Receive balancing is possible in software
- Older systems can have their usefulness extended by such facilities

NAPI, "NEW API"
- Basic abstraction for packet reception under Linux, providing software interrupt mitigation and load balancing
- Device signals an interrupt when receive packets are available
- Driver disables chip interrupts, and the NAPI context is queued up for packet processing
- Kernel generic networking processes the NAPI context in software interrupt context, asking the driver for packets one at a time
- When no more receive packets are pending, the NAPI context is dequeued and the driver re-enables chip interrupts
- Under high load with a weak cpu, we may process thousands of packets per hardware interrupt

NAPI LIMITATIONS
- It was not ready for multi-queue network devices
- One NAPI context exists per network device
- It assumes a network device only generates one interrupt source, which must be disabled during a poll
- Cannot cleanly handle a one-to-many relationship between network devices and packet event sources

FIXING NAPI
- Implemented by Stephen Hemminger
- Remove the NAPI context from the network device structure
- Let the device driver define NAPI instances however it likes in its private data structures
- Simple devices will define only one NAPI context
- Multiqueue devices will define a NAPI context for each RX queue
- All NAPI instances for a device execute independently: no shared locking, no mutual exclusion
- Independent cpus process independent NAPI contexts in parallel
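The interrupt-mitigation contract described above can be sketched in user-space C. This is a hypothetical model, not the kernel API: `struct napi_ctx`, `rx_interrupt()`, and `napi_poll()` are illustrative stand-ins for the real scheduling, poll-callback, and completion machinery.

```c
#include <assert.h>
#include <stdbool.h>

struct ring { int pending; };        /* packets waiting in the RX ring */
struct napi_ctx {
    struct ring *ring;
    bool irq_enabled;                /* chip interrupt state */
    bool scheduled;                  /* queued for softirq processing */
};

/* Hardware interrupt: disable chip interrupts, queue the NAPI context. */
static void rx_interrupt(struct napi_ctx *n)
{
    n->irq_enabled = false;
    n->scheduled = true;
}

/* Softirq side: ask the driver for up to 'budget' packets.  If the ring
 * drains before the budget is spent, dequeue the context and re-enable
 * chip interrupts; otherwise stay scheduled for another pass. */
static int napi_poll(struct napi_ctx *n, int budget)
{
    int done = 0;
    while (done < budget && n->ring->pending > 0) {
        n->ring->pending--;          /* "process" one packet */
        done++;
    }
    if (done < budget) {             /* ring empty: complete */
        n->scheduled = false;
        n->irq_enabled = true;
    }
    return done;
}
```

With 100 packets pending and a budget of 64, one hardware interrupt is followed by two polls (64 packets, then 36) before interrupts are re-enabled; under sustained load, many packets are consumed per hardware interrupt.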
OK, WHAT ABOUT NON-MULTIQUEUE HARDWARE?
- Remote softirq block I/O layer changes by Jens Axboe
- Provides a generic facility to execute software interrupt work on remote processors
- We can use this to simulate network devices with hardware multiqueue facilities
- netif_receive_skb() is changed to hash the flow of a packet and use this to choose a remote cpu
- We push the packet input processing to the selected cpu

CONFIGURABILITY
- The default distribution works well for most people
- Some people have special needs or want to control how work is distributed on their system
- Configurability of hardware facilities is variable: some provide lots of control, some provide very little
- We can fully control the software implementation

IMPETUS
- Writing the NIU driver
- Peter W's multiqueue hacks
- TX multiqueue will be pervasive
- Robert Olsson's pending paper on 10GB routing (found TX locking to be the bottleneck in several situations)

PRESENTATIONS
- Tokyo 2008, Berlin 2008, Seattle 2008, Portland 2008
- Implementation plan changing constantly; the end result was different from all of these designs
- Full implementation will be in 2.6.27

PICTURE OF TX ENGINE
[Figure: dev_queue_xmit() enters the QDISC under dev->queue_lock, the SKB queue mapping is set, and the driver's hard_start_xmit() runs under the TX lock, feeding one of several TXQs.]
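The software receive-balancing idea above (hash the flow in netif_receive_skb(), then push input processing to the chosen cpu) can be sketched as follows. The helper names (`flow_hash`, `select_rx_cpu`) and the FNV-1a mixing function are illustrative assumptions, not the kernel's implementation.

```c
#include <assert.h>
#include <stdint.h>

struct flow {
    uint32_t saddr, daddr;   /* source/destination IPv4 address */
    uint16_t sport, dport;   /* source/destination port */
};

/* Hash the flow 4-tuple byte-by-byte (FNV-1a, used here as a simple,
 * stable stand-in for the kernel's hash). */
static uint32_t flow_hash(const struct flow *f)
{
    const uint8_t *p = (const uint8_t *)f;
    uint32_t h = 2166136261u;                 /* FNV offset basis */
    for (unsigned i = 0; i < sizeof(*f); i++) {
        h ^= p[i];
        h *= 16777619u;                       /* FNV prime */
    }
    return h;
}

/* Map the flow hash onto one of the available cpus: all packets of a
 * given flow land on the same cpu, so per-flow ordering is preserved
 * (no reordering, which would look like loss to TCP), while distinct
 * flows spread across cpus. */
static unsigned select_rx_cpu(const struct flow *f, unsigned nr_cpus)
{
    return flow_hash(f) % nr_cpus;
}
```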
DETAILS TO NOTE
- Generic networking has no idea about multiple TX queues
- Parallelization impossible because of the single queue and TX lock
- Only a single root QDISC is possible; sharing is not allowed

PICTURE OF DEFAULT CONFIGURATION
[Figure: dev_queue_xmit() feeds three TXQs in front of the driver, each TXQ with its own qdisc, ->q.lock, and TX lock.]

PICTURE WITH NON-TRIVIAL QDISC
[Figure: a single qdisc with its ->q.lock feeds SKBs to three TXQs in front of the driver, each TXQ with its own TX lock.]

THE NETWORK DEVICE QUEUE
- Direction agnostic, can represent both RX and TX
- Holds:
  - Backpointer to struct net_device
  - Pointer to the ROOT qdisc for this queue, and the sleeping qdisc
  - TX lock for ->hard_start_xmit() synchronization
  - Flow control state bit (XOFF)

QDISC CHANGES
- Holds state bits for qdisc_run() scheduling
- Has a backpointer to the referring dev_queue
- Therefore, the currently configured QDISC root is always qdisc->dev_queue->qdisc_sleeping
- qdisc->q.lock on the root is now used for tree synchronization
- qdisc->list of the root qdisc replaces netdev->qdisc_list
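The per-queue state and the root-qdisc backpointer invariant described above can be mocked in a few lines of user-space C. Field names loosely echo the slides, but this is an illustrative sketch, not the real struct netdev_queue.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct net_device;   /* opaque in this sketch */
struct netdev_queue;

struct qdisc {
    struct netdev_queue *dev_queue;  /* backpointer to the referring queue */
};

struct netdev_queue {
    struct net_device *dev;          /* backpointer to the device */
    struct qdisc *qdisc;             /* currently active root qdisc */
    struct qdisc *qdisc_sleeping;    /* configured root qdisc */
    bool xoff;                       /* flow-control state bit */
    /* plus a TX lock serializing ->hard_start_xmit() in the real thing */
};

/* Attach a root qdisc to a queue.  Afterwards the invariant from the
 * slides holds: the configured root is q->dev_queue->qdisc_sleeping. */
static void attach_root_qdisc(struct netdev_queue *txq, struct qdisc *q)
{
    q->dev_queue = txq;
    txq->qdisc = q;
    txq->qdisc_sleeping = q;
}
```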
TX QUEUE SELECTION
- Simple hash function over the packet flow information (source and destination address, source and destination port)
- The driver or driver layer is allowed to override this queue selection function if the queues of the device are used for something other than flow separation or prioritization (example: wireless)
- In the future, this function will disappear, to be replaced with socket hash marking of transmit packets, and RX-side hash recording of received packets for forwarding and bridging

IMPLEMENTATION AND SEMANTIC ISSUES
- Synchronization is difficult
- People want to use queues for other tasks such as priority scheduling; this will be supported by the sch_multiq.c module from Intel in 2.6.28
- TX multiqueue interactions with the packet scheduler and its semantics are still not entirely clear
- Most of the problems are about flow control
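The TX queue selection scheme above — a default hash over the flow 4-tuple, overridable by the driver when its queues encode something else, such as wireless priorities — can be sketched as a function-pointer hook. All names here are illustrative assumptions, not the kernel's API.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct pkt {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  priority;
};

typedef uint16_t (*select_queue_fn)(const struct pkt *, uint16_t nr_queues);

/* Default: simple hash over source/destination address and port. */
static uint16_t default_select_queue(const struct pkt *p, uint16_t nr_queues)
{
    uint32_t h = p->saddr ^ p->daddr ^ ((uint32_t)p->sport << 16 | p->dport);
    h ^= h >> 16;
    return (uint16_t)(h % nr_queues);
}

/* Example driver override: map packet priority to a queue, as a
 * wireless driver with per-priority queues might. */
static uint16_t priority_select_queue(const struct pkt *p, uint16_t nr_queues)
{
    return p->priority % nr_queues;
}

/* The layer setting the SKB queue mapping uses the driver's hook when
 * present, otherwise the default flow hash. */
static uint16_t queue_for(const struct pkt *p, uint16_t nr_queues,
                          select_queue_fn override)
{
    select_queue_fn fn = override ? override : default_select_queue;
    return fn(p, nr_queues);
}
```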
Recommended publications
  • Linux: Kernel Release Number, Part II
    Linux: Kernel Release Number, Part II Posted by jeremy on Friday, March 4, 2005 - 07:05 In the continued discussion on release numbering for the Linux kernel [story], Linux creator Linus Torvalds decided against trying to add meaning to the odd/even least significant number. Instead, the new plan is to go from the current 2.6.x numbering to a finer-grained 2.6.x.y. Linus will continue to maintain only the 2.6.x releases, and the -rc releases in between. Others will add trivial patches to create the 2.6.x.y releases. Linus cautions that the task of maintaining a 2.6.x.y tree is not going to be enjoyable: "I'll tell you what the problem is: I don't think you'll find anybody to do the parallell 'only trivial patches' tree. They'll go crazy in a couple of weeks. Why? Because it's a _damn_ hard problem. Where do you draw the line? What's an acceptable patch? And if you get it wrong, people will complain _very_ loudly, since by now you've 'promised' them a kernel that is better than the mainline. In other words: there's almost zero glory, there are no interesting problems, and there will absolutely be people who claim that you're a dick-head and worse, probably on a weekly basis." He went on to add, "that said, I think in theory it's a great idea. It might even be technically feasible if there was some hard technical criteria for each patch that gets accepted, so that you don't have the burn-out problem." His suggested criteria included limiting the patch to being 100 lines or less, and requiring that it fix an oops, a hang, or an exploitable security hole.
  • Open Source Software Notice
    OPEN SOURCE SOFTWARE NOTICE — DCS Touch Display Software V2.00.XXX. Schüco International KG, Karolinenstraße 1-15, 33609 Bielefeld (Seite 1 von 32, 10000507685_02_EN). This document contains information about open source software for this product. The rights granted under open source software licenses are granted by the respective right holders. In the event of conflicts between SCHÜCO’S license conditions and the applicable open source licenses, the open source license conditions take precedence over SCHÜCO’S license conditions with regard to the respective open source software. You are allowed to modify SCHÜCO’S proprietary programs and to conduct reverse engineering for the purpose of debugging such modifications, to the extent such programs are linked to libraries licensed under the GNU Lesser General Public License. You are not allowed to distribute information resulting from such reverse engineering or to distribute the modified proprietary programs. The rightholders of the open source software require the following disclaimer to be referenced, which shall apply with regard to those rightholders: Warranty Disclaimer: THE OPEN SOURCE SOFTWARE IN THIS PRODUCT IS DISTRIBUTED ON AN "AS IS" BASIS AND IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY OF ANY KIND, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS. Copyright Notices and License Texts (please see the source code for all details): Software: iptables. Copyright notice: Copyright (C) 1989, 1991 Free Software Foundation, Inc. Copyright Google, Inc.
  • Hardware-Driven Evolution in Storage Software by Zev Weiss A
    Hardware-Driven Evolution in Storage Software, by Zev Weiss. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Sciences) at the UNIVERSITY OF WISCONSIN–MADISON, 2018. Date of final oral examination: June 8, 2018. The dissertation is approved by the following members of the Final Oral Committee: Andrea C. Arpaci-Dusseau, Professor, Computer Sciences; Remzi H. Arpaci-Dusseau, Professor, Computer Sciences; Michael M. Swift, Professor, Computer Sciences; Karthikeyan Sankaralingam, Professor, Computer Sciences; Johannes Wallmann, Associate Professor, Mead Witter School of Music. © Copyright by Zev Weiss 2018, All Rights Reserved. To my parents, for their endless support, and my cousin Charlie, one of the kindest people I’ve ever known. Acknowledgments: I have taken what might be politely called a “scenic route” of sorts through grad school. While Ph.D. students more focused on a rapid graduation turnaround time might find this regrettable, I am glad to have done so, in part because it has afforded me the opportunities to meet and work with so many excellent people along the way. I owe debts of gratitude to a large cast of characters: To my advisors, Andrea and Remzi Arpaci-Dusseau. It is one of the most common pieces of wisdom imparted on incoming grad students that one’s relationship with one’s advisor (or advisors) is perhaps the single most important factor in whether these years of your life will be pleasant or unpleasant, and I feel exceptionally fortunate to have ended up with the advisors that I’ve had.
  • Achieving Keyless Cdns with Conclaves
    Achieving Keyless CDNs with Conclaves (artifact evaluated: passed). Stephen Herwig (University of Maryland), Christina Garman (Purdue University), Dave Levin (University of Maryland). Abstract: Content Delivery Networks (CDNs) serve a large and increasing portion of today’s web content. Beyond caching, CDNs provide their customers with a variety of services, including protection against DDoS and targeted attacks. As the web shifts from HTTP to HTTPS, CDNs continue to provide such services by also assuming control of their customers’ private keys, thereby breaking a fundamental security principle: private keys must only be known by their owner. We present the design and implementation of Phoenix, the first truly “keyless CDN”. Phoenix uses secure enclaves (in particular Intel SGX) to host web content, store sensitive key material, apply web application firewalls, and more on otherwise untrusted machines. ...tamper with all of their customers, including virtually all of the world’s major banks, online shops, and many government sites. The messy relationship between HTTPS and CDNs is made all the more challenging by the fact that CDNs today do far more than merely host the bulk of the web’s content. They also use web application firewalls (WAFs) to analyze clients’ requests for evidence of targeted attacks like SQL injection or cross-site scripting, and filter them before uploading to their customers [8]. CDN customers benefit from this service because it scrubs attack traffic far from their own networks. And yet, running a WAF on a CDN requires the CDN to have access to the website’s unencrypted traffic. There have been recent advances to address aspects of this
  • To FUSE Or Not to FUSE? Analysis and Performance Characterization of the FUSE User-Space File System Framework
    To FUSE or not to FUSE? Analysis and Performance Characterization of the FUSE User-Space File System Framework. A Thesis Presented by Bharath Kumar Reddy Vangoor to The Graduate School in Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science, Stony Brook University. Technical Report FSL-16-02, December 2016. Copyright by Bharath Kumar Reddy Vangoor 2016. Stony Brook University, The Graduate School. We, the thesis committee for the above candidate for the Master of Science degree, hereby recommend acceptance of this thesis. Signature: Dr. Erez Zadok, Thesis Advisor, Professor, Computer Science. Signature: Dr. Mike Ferdman, Thesis Committee Chair, Assistant Professor, Computer Science. Signature: Dr. Vasily Tarasov, IBM Research – Almaden. This thesis is accepted by the Graduate School, Charles Taber, Dean of the Graduate School. Abstract of the Thesis: Traditionally, file systems were implemented as part of operating system kernels, which provide a limited set of tools and facilities to a programmer. As the complexity of file systems grew, many new file systems began being developed in user space. Low performance is considered the main disadvantage of user-space file systems, but the extent of this problem has never been explored systematically. As a result, the topic of user-space file systems remains rather controversial: while some consider user-space file systems a “toy” not to be used in production, others develop full-fledged production file systems in user space.
  • Reducing the Test Effort on AGL Kernels Together by Using LTSI and LTSI Test Framework
    Proposal: Reducing the Test Effort on AGL Kernels Together by Using LTSI and LTSI Test Framework. Feb 26th, 2015, Fan Xin, Fujitsu Computer Technologies Limited. Copyright 2015 FUJITSU COMPUTER TECHNOLOGIES LIMITED. whoami: in-house embedded Linux distributor of Fujitsu; our distribution includes the LTSI kernel, is built with the Yocto Project, and is used for IVI (In-Vehicle Infotainment), server system controllers, storage systems, network equipment, printers, etc. Agenda: introduction of the LTSI kernel; test cases for our distro; introduction of the LTSI test project; proposal for AGL. Release frequency of the mainline kernel:

        Version  Release date  Days of development
        3.11     2013-09-02    64
        3.12     2013-11-03    62
        3.13     2014-01-19    77
        3.14     2014-03-30    70
        3.15     2014-06-08    70
        3.16     2014-08-03    56
        3.17     2014-10-05    63
        3.18     2014-12-07    63
        3.19     2015-02-09    65

    Over time, kernel development cycles have slowly been getting shorter; the average cycle is just under 66 days (http://www.linuxfoundation.org/publications/linux-foundation/who-writes-linux-2015). Stable kernel updates ensure that patches will be made to fix the problems that are found:

        Kernel release  Updates  Fixes
        3.10            65       4,008
        3.11            10       688
        3.12            36       3,969
        3.13            11       908
        3.14            29       2,563
        3.15            10       701
        3.16            7        876
        3.17            8        890
        3.18            3        252

    Longterm Stable (LTS): there are usually several "longterm maintenance" kernel releases provided for the purposes of backporting bugfixes for older kernel trees.
  • Open Source Used in Cisco RV130/RV130W Firmware Version
    Open Source Used In Cisco RV130/RV130W 1.0.2.x Cisco Systems, Inc. www.cisco.com Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices. Text Part Number: 78EE117C99-88290969 Open Source Used In Cisco RV130/RV130W 1.0.2.x 1 This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number 78EE117C99-88290969 En ce qui a trait au logiciel gratuit ou à exploitation libre figurant dans ce document, si vous avez des questions ou souhaitez recevoir une copie du code source, auquel vous avez droit en vertu des licences gratuites ou d'exploitation libre applicables (telles que licences GNU Lesser/General Public), veuillez communiquer avec nous à l'adresse external- [email protected]. Dans vos demandes, veuillez inclure le numéro de référence 78EE117C99-88290969 Contents 1.1 bridge-utils 1.0.6 1.1.1 Available under license 1.2 BusyBox 1.7.2 1.2.1 Available under license 1.3 conntrack-tools 1.0.0 1.3.1 Available under license 1.4 cron 3.0 1.4.1 Available under license 1.5 curl 7.19.4 1.5.1 Available under license 1.6 dhcp 4.1.0 1.6.1
  • Using TCP/IP Traffic Shaping to Achieve Iscsi Service Predictability
    UNIVERSITY OF OSLO, Department of Informatics. Using TCP/IP traffic shaping to achieve iSCSI service predictability. Master thesis, Jarle Bjørgeengen, Oslo University College / Oslo University/USIT, May 26, 2010. Abstract: This thesis reproduces the properties of load interference common in many storage devices using resource sharing for flexibility and maximum hardware utilization. The nature of resource sharing and load is studied and compared to assumptions and models used in previous work. The results are used to design a method for throttling iSCSI initiators, attached to an iSCSI target server, using a packet delay module in Linux Traffic Control. The packet delay throttle enables close-to-linear rate reduction for both read and write operations. Iptables and Ipset are used to add the dynamic packet matching needed for rapidly changing throttling values. All throttling is achieved without triggering TCP retransmit timeout and subsequent slow start caused by packet loss. A control mechanism for dynamically adapting throttling values to rapidly changing workloads is implemented using a modified proportional integral derivative (PID) controller. Using experiments, control engineering filtering techniques and results from previous research, a suitable per-resource saturation indicator was found. The indicator is an exponential moving average of the wait time of active resource consumers. It is used as input value to the PID controller managing the packet rates of resource consumers, creating a closed control loop managed by the PID controller. Finally a prototype of an autonomic resource prioritization framework is designed.
  • Reliable Writeback for Client-Side Flash Caches
    RELIABLE WRITEBACK FOR CLIENT-SIDE FLASH CACHES, by Dai Qin. A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto. © Copyright 2014 by Dai Qin. Abstract: Reliable Writeback for Client-side Flash Caches, Dai Qin, Master of Applied Science, Graduate Department of Electrical and Computer Engineering, University of Toronto, 2014. Modern data centers are increasingly using shared storage solutions for ease of management. Data is cached on the client side on inexpensive and high-capacity flash devices, helping improve performance and reduce contention on the storage side. Currently, write-through caching is used because it ensures consistency and durability under client failures, but it offers poor performance for write-heavy workloads. In this work, we propose two write-back based caching policies, called write-back flush and write-back persist, that provide strong reliability guarantees, under two different client failure models. These policies rely on storage applications such as file systems and databases issuing write barriers to persist their data, because these barriers are the only reliable method for storing data durably on storage media. Our evaluation shows that these policies achieve performance close to write-back caching, while providing stronger guarantees than vanilla write-through caching. Acknowledgements: I would like to thank my committee members, Professor Amr Helmy, Professor Ding Yuan and Professor Michael Stumm, for their valuable comments and positive feedback. I would like to thank my supervisors, Professor Ashvin Goel and Professor Angela Demke Brown, for their support, patience and encouragement.
  • DAMN: Overhead-Free IOMMU Protection for Networking
    DAMN: Overhead-Free IOMMU Protection for Networking. Alex Markuze, Igor Smolyar, Adam Morrison, Dan Tsafrir (Technion – Israel Institute of Technology; Tel Aviv University; VMware Research Group). Abstract: DMA operations can access memory buffers only if they are “mapped” in the IOMMU, so operating systems protect themselves against malicious/errant network DMAs by mapping and unmapping each packet immediately before/after it is DMAed. This approach was recently found to be riskier and less performant than keeping packets non-DMAable and instead copying their content to/from permanently-mapped buffers. Still, the extra copy hampers performance of multi-gigabit networking. We observe that achieving protection at the DMA (un)map boundary is needlessly constraining, as devices must be prevented from changing the data only after the kernel reads it. So there is no real need to switch ownership of buffers between kernel and device at the DMA (un)mapping layer, as opposed to the approach taken by all existing IOMMU protection schemes. 1 Introduction: Bandwidth demands of servers in modern data centers scale with the number of cores per server, requiring support for ever-growing networking speeds both at the endpoints and at the data center fabric. Google’s Jupiter fabric [36], for example, assumed 40 Gb/s endpoints, and 100 Gb/s network cards (NICs) have since been made available. Data center servers must also offer strong security guarantees to their clients, one aspect of which includes protection from compromised or malicious devices, a threat that is explicitly considered in Google’s security design [18, Secure Data Storage]. Presently, however, an operating system (OS) cannot defend against malicious NICs without paying a significant performance penalty. This limitation results from the use of I/O memory management units (IOMMUs) [2, 4, 19, 23] to restrict device direct memory access (DMA), as unrestricted
  • Open Source Used in Rv130x V1.0.3.51
    Open Source Used In RV130 RV130W 1.0.3.51 Cisco Systems, Inc. www.cisco.com Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco website at www.cisco.com/go/offices. Text Part Number: 78EE117C99-202560422 Open Source Used In RV130 RV130W 1.0.3.51 1 This document contains licenses and notices for open source software used in this product. With respect to the free/open source software listed in this document, if you have any questions or wish to receive a copy of any source code to which you may be entitled under the applicable free/open source license(s) (such as the GNU Lesser/General Public License), please contact us at [email protected]. In your requests please include the following reference number 78EE117C99-202560422 En ce qui a trait au logiciel gratuit ou à exploitation libre figurant dans ce document, si vous avez des questions ou souhaitez recevoir une copie du code source, auquel vous avez droit en vertu des licences gratuites ou d'exploitation libre applicables (telles que licences GNU Lesser/General Public), veuillez communiquer avec nous à l'adresse external- [email protected]. Dans vos demandes, veuillez inclure le numéro de référence 78EE117C99-202560422 Contents 1.1 bridge-utils 1.0.6 1.1.1 Available under license 1.2 BusyBox 1.7.2 1.2.1 Available under license 1.3 conntrack-tools 1.0.0 1.3.1 Available under license 1.4 cron 3.0 1.4.1 Available under license 1.5 curl 7.19.4 1.5.1 Available under license 1.6 dhcp 4.1.0 1.6.1 Available
  • Understanding the Impact of Cache Locations on Storage Performance and Energy Consumption of Virtualization Systems
    Understanding the Impact of Cache Locations on Storage Performance and Energy Consumption of Virtualization Systems. Tao Lu, Ping Huang, Xubin He (Virginia Commonwealth University, USA), Ming Zhang (EMC Corporation, USA). flut2, phuang, [email protected], [email protected]. Abstract: As per-server CPU cores and DRAM capacity continuously increase, application density of virtualization platforms hikes. High application density imposes tremendous pressure on storage systems. Layers of caches are deployed to improve storage performance. Owing to its manageability and transparency advantages, hypervisor-side caching is widely employed. However, hypervisor-side caches locate at the lower layer of VM disk filesystems. Thus, the critical path of cache access involves the virtual I/O sub-path, which is expensive in operation latency and CPU cycles. The virtual I/O sub-path caps the throughput (IOPS) of the hypervisor-side cache and incurs additional energy consumption. ...ing in increased resource utilization [2]. Despite the management flexibilities, hypervisor-side cache access involves the costly I/O virtualization layers, which cause intensive activities of the userspace process such as qemu, and incur considerable interrupt deliveries [6]. Study on virtio [7] with network transactions shows that a busy virtualized web-server may consume 40% more energy, due to 5x more CPU cycles to deliver a packet, than its non-virtualized counterparts [8]. Similarly, our tests on storage transactions show that I/O virtualization caps hypervisor-side cache performance and increases per I/O energy consumption. In contrast, VM-side caches may obviate the I/O virtualization overheads. More specifically, for 4KB read requests at the IOPS