Appendix References

VU Research Portal

On the design of reliable and scalable networked systems
Hruby, T.
2016

Document version: Publisher's PDF, also known as Version of Record

Link to publication in VU Research Portal

Citation for published version (APA):
Hruby, T. (2016). On the design of reliable and scalable networked systems.

General rights
Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.
• Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
• You may not further distribute the material or use it for any profit-making activity or commercial gain.
• You may freely distribute the URL identifying the publication in the public portal.

Take down policy
If you believe that this document breaches copyright please contact us providing details, and we will remove access to the work immediately and investigate your claim.

E-mail address: [email protected]

Download date: 29. Sep. 2021

References

[1] ASLR: Leopard versus Vista. http://blog.laconicsecurity.com/2008/01/aslr-leopard-versus-vista.html.
[2] ARM - big.LITTLE Processing. http://www.arm.com/products/processors/technologies/biglittleprocessing.php.
[3] Killing the Big Kernel Lock. http://lwn.net/Articles/380174/.
[4] Network transmit queue limits. http://lwn.net/Articles/454390/.
[5] D-Bus. http://dbus.freedesktop.org.
[6] The Heartbleed Bug. http://heartbleed.com/.
[7] httperf. http://www.hpl.hp.com/research/linux/httperf/.
[8] The unveiling of kdbus. http://lwn.net/Articles/580194/.
[9] Intel’s "Knights Landing" Xeon Phi Coprocessor Detailed. http://www.anandtech.com/show/8217/intels-knights-landing-coprocessor-detailed.
[10] libevent. http://libevent.org.
[11] Lighttpd Web Server. http://www.lighttpd.net/.
[12] MINIX 3. http://www.minix3.org.
[13] OpenOnload. http://www.openonload.org.
[14] RCU Linux Usage. http://www.rdrop.com/users/paulmck/RCU/linuxusage.html.
[15] RFC: remove __read_mostly. http://lwn.net/Articles/262557.
[16] Intel Turbo Boost Technology in Intel Core Microarchitecture (Nehalem) Based Processors. http://files.shareholder.com/downloads/INTC/0x0x348508/C9259E98-BE06-42C8-A433-E28F64CB8EF2/TurboBoostWhitePaper.pdf.
[17] Average Web Page Size Triples Since 2008, 2012. http://www.websiteoptimization.com/speed/tweak/average-web-page/.
[18] What’s New for Windows Sockets. http://msdn.microsoft.com/en-us/library/windows/desktop/ms740642(v=vs.85).aspx.
[19] The Intel Xeon Phi Coprocessor. http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html.
[20] QNX Neutrino RTOS System Architecture. http://support7.qnx.com/download/download/14695/sys_arch.pdf.
[21] Vulnerability in TCP/IP Could Allow Remote Code Execution. http://technet.microsoft.com/en-us/security/bulletin/ms11-083.
[22] Variable SMP - A Multi-Core CPU Architecture for Low Power and High Performance. http://www.nvidia.com/content/PDF/tegra_white_papers/tegra-whitepaper-0911b.pdf.
[23] Samsung to outline 8-core big.LITTLE ARM processor in February. http://www.engadget.com/2012/11/20/samsung-to-outline-8-core-big-little-arm-processor-in-february/.
[24] Distributed Caching with Memcached. http://www.linuxjournal.com/article/7451.
[25] Nginx: the High-Performance Web Server and Reverse Proxy. http://www.linuxjournal.com/magazine/nginx-high-performance-web-server-and-reverse-proxy.
[26] Jonathan Appavoo, Dilma Da Silva, Orran Krieger, Marc Auslander, Michal Ostrowski, Bryan Rosenburg, Amos Waterland, Robert W. Wisniewski, Jimi Xenidis, Michael Stumm, and Livio Soares. Experience Distributing Objects in an SMMP OS. ACM Trans. Comput. Syst., 2007.
[27] Raja Appuswamy, David C. van Moolenbroek, and Andrew S. Tanenbaum. Loris - A Dependable, Modular File-Based Storage Stack. In Proceedings of the Pacific Rim International Symposium on Dependable Computing, 2010.
[28] Jeff Arnold and M. Frans Kaashoek. Ksplice: Automatic Rebootless Kernel Updates. In Proceedings of the 4th ACM European Conference on Computer Systems, EuroSys ’09, 2009.
[29] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, Alex Ho, Rolf Neugebauer, Ian Pratt, and Andrew Warfield. Xen and the Art of Virtualization. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, SOSP ’03, 2003.
[30] Andrew Baumann, Gernot Heiser, Dilma Da Silva, Orran Krieger, Robert W. Wisniewski, and Jeremy Kerr. Providing Dynamic Update in an Operating System. In Proceedings of the USENIX Annual Technical Conference, 2005.
[31] Andrew Baumann, Paul Barham, Pierre-Evariste Dagand, Tim Harris, Rebecca Isaacs, Simon Peter, Timothy Roscoe, Adrian Schüpbach, and Akhilesh Singhania. The Multikernel: A New OS Architecture for Scalable Multicore Systems. In Proceedings of the Symposium on Operating Systems Principles, 2009.
[32] Michela Becchi and Patrick Crowley. Dynamic Thread Assignment on Heterogeneous Multiprocessor Architectures. In Proceedings of the 3rd Conference on Computing Frontiers, CF ’06, 2006.
[33] Adam Belay, George Prekas, Ana Klimovic, Samuel Grossman, Christos Kozyrakis, and Edouard Bugnion. IX: A Protected Dataplane Operating System for High Throughput and Low Latency. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), 2014.
[34] Andrea Bittau, Adam Belay, Ali Mashtizadeh, David Mazieres, and Dan Boneh. Hacking Blind. In Proceedings of the IEEE Symposium on Security and Privacy (Oakland), 2014.
[35] Herbert Bos, Willem de Bruijn, Mihai Cristea, Trung Nguyen, and Georgios Portokalidis. FFPF: Fairly Fast Packet Filters. In Proceedings of the USENIX Conference on Operating Systems Design and Implementation, 2004.
[36] Silas Boyd-Wickizer and Nickolai Zeldovich. Tolerating Malicious Device Drivers in Linux. In Proceedings of the USENIX Annual Technical Conference, 2010.
[37] Silas Boyd-Wickizer, Haibo Chen, Rong Chen, Yandong Mao, Frans Kaashoek, Robert Morris, Aleksey Pesterev, Lex Stein, Ming Wu, Yuehua Dai, Yang Zhang, and Zheng Zhang. Corey: An Operating System for Many Cores. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation, OSDI ’08, Berkeley, CA, USA, 2008. USENIX Association. URL http://dl.acm.org/citation.cfm?id=1855741.1855745.
[38] Silas Boyd-Wickizer, Austin T. Clements, Yandong Mao, Aleksey Pesterev, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich. An Analysis of Linux Scalability to Many Cores. In Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, OSDI ’10, Berkeley, CA, USA, 2010. USENIX Association.
[39] Silas Boyd-Wickizer, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich. Non-scalable Locks are Dangerous. In Proceedings of the Ottawa Linux Symposium, Ottawa, Canada, July 2012.
[40] Miguel Castro, Manuel Costa, Jean-Philippe Martin, Marcus Peinado, Periklis Akritidis, Austin Donnelly, Paul Barham, and Richard Black. Fast Byte-granularity Software Fault Isolation. In Proceedings of the 22nd ACM SIGOPS Symposium on Operating Systems Principles, 2009.
[41] Andy Chou, Junfeng Yang, Benjamin Chelf, Seth Hallem, and Dawson Engler. An Empirical Study of Operating Systems Errors. In Proceedings of the Eighteenth ACM Symposium on Operating Systems Principles, SOSP ’01, 2001.
[42] Austin T. Clements, M. Frans Kaashoek, Nickolai Zeldovich, Robert T. Morris, and Eddie Kohler. The Scalable Commutativity Rule: Designing Scalable Software for Multicore Processors. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, SOSP ’13, 2013.
[43] Patrick Colp, Mihir Nanavati, Jun Zhu, William Aiello, George Coker, Tim Deegan, Peter Loscocco, and Andrew Warfield. Breaking Up is Hard to Do: Security and Functionality in a Commodity Hypervisor. In Proceedings of the Symposium on Operating Systems Principles, 2011.
[44] Gilberto Contreras and Margaret Martonosi. Power Prediction for Intel XScale Processors Using Performance Monitoring Unit Events. In Proceedings of the 2005 International Symposium on Low Power Electronics and Design, 2005.
[45] Benjamin Cox, David Evans, Adrian Filipi, Jonathan Rowanhill, Wei Hu, Jack Davidson, John Knight, Anh Nguyen-Tuong, and Jason Hiser. N-Variant Systems: A Secretless Framework for Security Through Diversity. In Proceedings of the 15th USENIX Security Symposium, 2006.
[46] Cristiano Giuffrida, Lorenzo Cavallaro, and Andrew S. Tanenbaum. We Crashed, Now What? In Proceedings of the 6th International Workshop on Hot Topics in System Dependability, 2010.
[47] Francis M. David, Ellick M. Chan, Jeffrey C. Carlyle, and Roy H. Campbell. CuriOS: Improving Reliability Through Operating System Structure. In Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation, 2008.
[48] Tudor David, Rachid Guerraoui, and Vasileios Trigonakis. Everything You Always Wanted to Know About Synchronization but Were Afraid to Ask. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, 2013.
[49] Tudor David, Rachid Guerraoui, and Vasileios Trigonakis. Everything You Always Wanted to Know About Synchronization but Were Afraid to Ask. In Proceedings of the Symposium on Operating Systems Principles, 2013.
[50] Willem de Bruijn, Herbert Bos, and Henri Bal. Application-Tailored