Kata Containers and gVisor: A Quantitative Comparison
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
An Event-Driven Embedded Operating System for Miniature Robots
This is a repository copy (accepted version) of OpenSwarm: an event-driven embedded operating system for miniature robots, deposited in White Rose Research Online: http://eprints.whiterose.ac.uk/110437/. Proceedings paper: Trenkwalder, S.M., Lopes, Y.K., Kolling, A. et al. (3 more authors) (2016) OpenSwarm: an event-driven embedded operating system for miniature robots. In: Proceedings of IROS 2016, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, 09-14 Oct 2016, Daejeon, Korea. IEEE. ISBN 978-1-5090-3762-9. https://doi.org/10.1109/IROS.2016.7759660. © 2016 IEEE. Personal use of this material is permitted; permission from IEEE is required for all other uses.
Benchmarking, Analysis, and Optimization of Serverless Function Snapshots
Dmitrii Ustiugov, Plamen Petrov, and Boris Grot (University of Edinburgh, United Kingdom), Marios Kogias (Microsoft Research, United Kingdom), and Edouard Bugnion (EPFL, Switzerland).

ABSTRACT: Serverless computing has seen rapid adoption due to its high scalability and flexible, pay-as-you-go billing model. In serverless, developers structure their services as a collection of functions, sporadically invoked by various events like clicks. High inter-arrival time variability of function invocations motivates the providers to start new function instances upon each invocation, leading to significant cold-start delays that degrade user experience. To reduce cold-start latency, the industry has turned to snapshotting, whereby an image of a fully-booted function is stored on disk, enabling a faster invocation compared to booting a function from scratch. This work introduces vHive, an open-source framework for serverless experimentation with the goal of enabling researchers to study and innovate across the entire serverless stack. Using vHive, we characterize a state-of-the-art snapshot-based serverless infrastructure, based on industry-leading Containerd orchestration framework and Firecracker hypervisor technologies.

CCS CONCEPTS: Computer systems organization → Cloud computing; Information systems → Computing platforms; Data centers; Software and its engineering → n-tier architectures.

KEYWORDS: cloud computing, datacenters, serverless, virtualization, snapshots.

ACM Reference Format: Dmitrii Ustiugov, Plamen Petrov, Marios Kogias, Edouard Bugnion, and Boris Grot. 2021. Benchmarking, Analysis, and Optimization of Serverless Function Snapshots. In Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '21), April 19–23, 2021, Virtual, USA. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3445814.3446714
What Are the Problems with Embedded Linux?
Every Operating System Has Benefits and Drawbacks. Linux is ubiquitous. It runs most internet servers, is inside Android* smartphones, and is used on millions of embedded systems that, in the past, ran Real-Time Operating Systems (RTOSes). Linux can (and should) be used where possible for embedded projects, but while it gives you extreme choice, it also presents the risk of extreme complexity. What, then, are the trade-offs between embedded Linux and an RTOS? In this article, we cover some key considerations when evaluating Linux for a new development:
■ Design your system architecture first
■ What is Linux?
■ Linux vs. RTOSes
■ Free software is about liberty—not price
■ How much does Embedded Linux cost?
■ Why pay for Embedded Linux?
■ Should you buy Embedded Linux or roll-your-own (RYO)?
■ To fork or not to fork?
■ Software patching
■ Open source licensing
■ Making an informed decision
The important thing is not whether Linux or an RTOS is “the best,” but whether either operating system—or both together—makes the most technical and financial sense for your project. We hope this article helps you make an informed decision.

Design Your System Architecture First. It is important to design your system architecture first—before choosing either Linux or an RTOS—because both choices can limit architectural freedom. You may discover that aspects of your design require neither Linux nor an RTOS, making your design a strong candidate for a heterogeneous approach that includes one or more bare-metal environments (with possibly a Linux and/or RTOS environment as well).
Co-Optimizing Performance and Memory Footprint Via Integrated CPU/GPU Memory Management, an Implementation on Autonomous Driving Platform
2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS). Soroush Bateni*, Zhendong Wang*, Yuankun Zhu, Yang Hu, and Cong Liu, The University of Texas at Dallas.

Abstract—Cutting-edge embedded system applications, such as self-driving cars and unmanned drone software, are reliant on integrated CPU/GPU platforms for their DNNs-driven workload, such as perception and other highly parallel components. In this work, we set out to explore the hidden performance implication of GPU memory management methods of integrated CPU/GPU architecture. Through a series of experiments on micro-benchmarks and real-world workloads, we find that the performance under different memory management methods may vary according to application characteristics. Based on this observation, we develop a performance model that can predict system overhead for each memory management method based on application characteristics. Guided by the performance model, we further propose a runtime scheduler.

[…] launches the OpenVINO toolkit for the edge-based deep learning inference on its integrated HD GPUs [4]. Despite the advantages in SWaP features presented by the integrated CPU/GPU architecture, our community still lacks an in-depth understanding of the architectural and system behaviors of integrated GPU when emerging autonomous and edge intelligence workloads are executed, particularly in multi-tasking fashion. Specifically, in this paper we set out to explore the performance implications exposed by various GPU memory management (MM) methods of the integrated CPU/GPU architecture. The reason we focus on the performance impacts […]
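The excerpt does not include the paper's actual performance model, so the sketch below is purely illustrative: the memory-management method names, the application features, and every coefficient are invented assumptions, meant only to show the general shape of a model-guided choice among GPU memory-management methods for a hypothetical perception workload.

    #include <stdio.h>

    /* Illustrative only: pick the CPU/GPU memory-management method with the
     * lowest predicted overhead for a given application profile. The method
     * names, features and coefficients below are invented for this sketch. */
    typedef enum { MM_DEVICE_COPY, MM_UNIFIED, MM_ZERO_COPY, MM_COUNT } MMethod;

    typedef struct {
        double transfer_mb;     /* data moved between CPU and GPU per frame */
        double kernel_launches; /* GPU invocations per frame                */
        double reuse_ratio;     /* fraction of data touched more than once  */
    } AppProfile;

    static const char *mm_name[MM_COUNT] = { "device copy", "unified memory", "zero copy" };

    /* A toy linear model returning predicted overhead in milliseconds. */
    static double predict_overhead(MMethod m, const AppProfile *p) {
        switch (m) {
        case MM_DEVICE_COPY: return 0.80 * p->transfer_mb + 0.02 * p->kernel_launches;
        case MM_UNIFIED:     return 0.30 * p->transfer_mb + 2.00 * (1.0 - p->reuse_ratio);
        case MM_ZERO_COPY:   return 1.50 * p->transfer_mb * (1.0 - p->reuse_ratio);
        default:             return 1e9;
        }
    }

    int main(void) {
        /* A hypothetical perception workload on an autonomous-driving platform. */
        AppProfile perception = { .transfer_mb = 64.0, .kernel_launches = 30.0, .reuse_ratio = 0.7 };

        MMethod best = MM_DEVICE_COPY;
        for (int m = 0; m < MM_COUNT; m++)
            if (predict_overhead((MMethod)m, &perception) < predict_overhead(best, &perception))
                best = (MMethod)m;      /* keep the cheapest predicted option */

        printf("scheduler picks %s (predicted overhead %.1f ms)\n",
               mm_name[best], predict_overhead(best, &perception));
        return 0;
    }

The paper derives its model from measurements on the actual integrated CPU/GPU platform; this sketch only illustrates the idea of letting a predicted per-method overhead drive the runtime decision.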
Performance Study of Real-Time Operating Systems for Internet of Things Devices
IET Software research article, ISSN 1751-8806. Received 11th April 2017, revised 13th December 2017, accepted 13th January 2018, e-first 16th February 2018. doi: 10.1049/iet-sen.2017.0048, www.ietdl.org. Rafael Raymundo Belleza and Edison Pignaton de Freitas, Institute of Informatics, Federal University of Rio Grande do Sul, Av. Bento Gonçalves, 9500, CP 15064, Porto Alegre, CEP 91501-970, Brazil.

Abstract: The development of constrained devices for the internet of things (IoT) presents lots of challenges to software developers who build applications on top of these devices. Many applications in this domain have severe non-functional requirements related to timing properties, which are important concerns that have to be handled. By using real-time operating systems (RTOSs), developers have greater productivity, as they provide native support for real-time properties handling. Some of the key points in the software development for IoT in these constrained devices, like task synchronisation and network communications, are already solved by this provided real-time support. However, different RTOSs offer different degrees of support to the different demanded real-time properties. Observing this aspect, this study presents a set of benchmark tests on selected open source and proprietary RTOSs focused on the IoT. The benchmark results show that there is no clear winner, as each RTOS performs well on at least some criteria, but general conclusions can be drawn on the suitability of each of them according to their performance evaluation in the obtained results.
Qualification Profile #10309
GENERAL DATA. Year of birth: 1972. Education: Abitur; Diplom in computer science (TU Kaiserslautern). Foreign languages: English, French. Specialties: Kubernetes.

SKILLS
Tools: Active Directory, Apache, Application, Case, CATIA, CVS, Eclipse, Exchange, Framework, GUI, Innovator, ITIL, J2EE, JMS, LDAP, Lotus Notes, make, MS Exchange, MS Outlook, MS Office, MS Visual Studio, NetBeans, OSGI, RACF, SAS, sendmail, Together, Turbine, UML, VMware, .NET, ADS, ANT, ASP, ASP.NET, Flash, GEnie, IDES, Image, IntelliJ IDEA, IPC, Jackson, JBoss, Lex, MS Visio, ODBC, Oracle Application Server, OWL, PGP, SPSS, SQS, TesserAct, Tivoli, Toolbook, Total, Transform, Visio, WebLogic, WebSphere, YACC
Activities: administration, analysis, consulting, design, documentation, AI, conceptual design, optimization, support, sales
Languages: Ajax, Basic, C, C#, C++, Cobol, Delphi, ETL, Fortran, Java, JavaScript, Natural, Perl, PHP, PL/I, PL/SQL, Python, SAL, Smalltalk, SQL, ABAP, Atlas, Clips, Delta, FOCUS, HTML, Nomad, Pascal, SPL, Spring, TAL, XML
Detailed components: AM, BI, FS-BA, MDM, PDM, PM, BW, CO, FI, LO, PP
Databases: Approach, IBM, Microsoft, ObjectStore, Oracle, Progress, Sybase, DMS, ISAM, JDBC, MySQL
DC/networks: ATM, DDS, Gateway, HBCI, Hub, Internet, Intranet, OpenSSL, SSL, VPN, Asynchronous, Cisco Router, DNS, DSL, Firewall, Gateways, HTTP, RFC, Router, Samba, Sockets, Switches
Finance: business intelligence, Excel, consolidation, management, project lead, reporting, testing, securities
Purchasing
CAD systems: CATIA V5
Other hardware: Digital, HP, PC, Scanner, Siemens, Spark, Teradata, Bus, FileNet, NeXT, SUN, Switching
Tools, methods: Docker, Go, Kubernetes, Rational, RUP
gVisor Is a Project to Restrict the Number of Syscalls That the Kernel and User Space Use to Communicate
SED 820 Transcript. EPISODE 820. [INTRODUCTION] [0:00:00.3] JM: The Linux operating system includes user space and kernel space. In user space, the user can create and interact with a variety of applications directly. In kernel space, the Linux kernel provides a stable environment in which device drivers interact with hardware and manage low-level resources. A Linux container is a virtualized environment that runs within user space. To perform an operation, a process in a container in user space makes a syscall, which is a system call into kernel space. This allows the container to have access to resources like memory and disk. Kernel space must be kept secure to ensure the operating system’s integrity. Linux includes hundreds of syscalls. Each syscall represents an interface between the user space and the kernel space. Security vulnerabilities can emerge from this wide attack surface of different syscalls, and most applications only need a small number of syscalls to provide their required functionality. gVisor is a project to restrict the number of syscalls that the kernel and user space need to communicate. gVisor is a runtime layer between the user space container and the kernel space. gVisor reduces the number of syscalls that can be made into kernel space. The security properties of gVisor make it an exciting project today, but it is the portability features of gVisor that hint at a huge future opportunity. By inserting an interpreter interface between containers and the Linux kernel, gVisor presents the container world with an opportunity to run on operating systems other than Linux.
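The transcript describes gVisor only at a high level, and gVisor's own mechanism (a user-space Sentry that intercepts syscalls) is not shown here. As a minimal, self-contained sketch of the underlying idea of shrinking the syscall surface, the C program below uses Linux's seccomp strict mode, which confines a process to read(2), write(2), exit(2), and sigreturn(2). It illustrates syscall restriction in general, not gVisor's implementation.

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/seccomp.h>

    int main(void) {
        /* Enter seccomp strict mode: from this point the kernel accepts only
         * read(2), write(2), exit(2) and sigreturn(2) from this thread; any
         * other syscall terminates the process with SIGKILL. */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl(PR_SET_SECCOMP)");   /* seccomp is not active here */
            return 1;
        }

        const char msg[] = "write() is still permitted\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);

        /* Anything outside the whitelist would now be fatal, e.g.:
         *     open("/etc/hostname", O_RDONLY);   -- process would be killed */

        /* glibc's _exit() wrapper calls exit_group(2), which strict mode does
         * not allow, so invoke the plain exit(2) syscall directly. */
        syscall(SYS_exit, 0);
    }

In practice, sandboxes rely on seccomp-BPF allowlists tailored to the workload rather than strict mode, which is too restrictive for most real programs.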
Our TAs; Top 11 Technologies of the Decade; TinyOS and nesC; Hardware Evolution (lecture slides)
Slides by Chenyang Lu.

Our TAs:
- Mo Sha: TinyOS tutorial; help students with projects; manage motes; grade projects. Office hour: Tue/Thu 5:30-6, Bryan 502A.
- Yong Fu: grade critiques. Office hour: by appointment, Bryan 502D.

Top 11 Technologies of the Decade: 1. Smartphones; 2. Social Networking; 3. Voice over IP; 4. LED Lighting; 5. Multicore CPUs; 6. Cloud Computing; 7. Drone Aircraft; 8. Planetary Rovers; 9. Flexible AC Transmission; 10. Digital Photography; 11. Class-D Audio.

TinyOS and nesC:
- TinyOS: OS for wireless sensor networks.
- nesC: programming language for TinyOS.

Hardware Evolution:
- Miniature devices manufactured economically: microprocessors, sensors/actuators, wireless chips. [Figure: form factors shrinking from 4.5'' x 2.4'' to 1'' x 1'' to 1 mm2 to 1 nm2.]

Mica2 Mote:
- Processor: microcontroller, 7.4 MHz, 8 bit; memory: 4 KB data, 128 KB program.
- Radio: max 38.4 Kbps.
- Sensors: light, temperature, acceleration, acoustic, magnetic…
- Power: <1 week on two AA batteries in active mode; >1 year battery life in sleep modes.

Hardware Constraints: severe constraints on power, size, and cost:
- slow microprocessor
- low-bandwidth radio
- limited memory
- limited hardware parallelism (CPU hit by many interrupts!)
- manage sleep modes in hardware components

Software Challenges:
- Small memory footprint
- Efficiency: power and processing
- Concurrency-intensive operations
- Diversity in applications & platforms → efficient modularity
- Support reconfigurable hardware and software

Traditional OS:
- Multi-threaded, preemptive scheduling.
- Threads: ready to run; executing on the CPU; waiting for data. [Figure: thread-state diagram; a ready thread gets the CPU and executes, an executing thread is preempted back to ready or blocks when it needs data, and a waiting thread becomes ready when it gets data.]

Pros and Cons of Traditional OS.

Example: Preemptive Priority Scheduling:
- Each process has a fixed priority (1 highest); P1: priority 1; P2: priority 2; P3: priority 3.
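The slide's example stops at the task set (P1-P3 with fixed priorities), so the arrival times and execution budgets in the sketch below are invented for illustration; it is a toy discrete-time simulation of preemptive fixed-priority scheduling, not TinyOS or nesC code.

    #include <stdio.h>

    /* Toy discrete-time simulation of preemptive fixed-priority scheduling for
     * the slide's task set: P1 (priority 1, highest), P2 and P3. Arrival times
     * and execution budgets are invented for illustration. */
    typedef struct {
        const char *name;
        int priority;   /* 1 = highest */
        int arrival;    /* time at which the task becomes ready */
        int remaining;  /* execution time still required        */
    } Task;

    int main(void) {
        Task tasks[] = {
            { "P3", 3, 0, 4 },  /* starts first, preempted twice           */
            { "P2", 2, 2, 3 },  /* preempts P3 at t=2, preempted by P1     */
            { "P1", 1, 3, 2 },  /* highest priority, runs as soon as ready */
        };
        const int n = (int)(sizeof tasks / sizeof tasks[0]);

        for (int t = 0; t < 16; t++) {
            Task *run = NULL;
            for (int i = 0; i < n; i++) {
                Task *cand = &tasks[i];
                if (cand->arrival <= t && cand->remaining > 0 &&
                    (run == NULL || cand->priority < run->priority))
                    run = cand;         /* highest-priority ready task wins */
            }
            if (run == NULL)
                break;                  /* all tasks finished               */
            printf("t=%2d  running %s\n", t, run->name);
            run->remaining--;
        }
        return 0;
    }

Running it prints P3 executing until P2 arrives, P2 until P1 arrives, then P1, P2, and P3 finishing in priority order. For contrast, TinyOS itself schedules non-preemptive, run-to-completion tasks plus event handlers rather than preemptive threads.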
Surviving Software Dependencies
Practice. DOI: 10.1145/3347446. Article development led by queue.acm.org. Software reuse is finally here but comes with risks. BY RUSS COX.

FOR DECADES, DISCUSSION of software reuse was more common than actual software reuse. Today, the situation is reversed: developers reuse software written by others every day, in the form of software dependencies, and the situation goes mostly unexamined. A dependency is additional code a programmer wants to call. Adding a dependency avoids repeating work: designing, testing, debugging, and maintaining a specific unit of code. In this article, that unit of code is referred to as a package; some systems use the terms library and module instead. Taking on externally written dependencies is not new. Most programmers have at one point in their careers had to go through the steps of manually installing a required library, such as C's PCRE or zlib; C++'s Boost or Qt; or Java's JodaTime or JUnit. These packages contain high-quality, debugged code that required significant expertise to develop. For a program that needs the functionality provided by one of these packages, the tedious work of manually downloading, installing, and updating the package is easier than the work of redeveloping that functionality from scratch. The high fixed costs of reuse, however, mean manually reused packages tend to be big; a tiny package would be easier to reimplement. A dependency manager (a.k.a. package manager) automates the downloading and installation of dependency packages. As dependency managers make individual packages easier to download and install, the lower fixed […]
Firecracker: Lightweight Virtualization for Serverless Applications
Alexandru Agache, Marc Brooker, Andreea Florescu, Alexandra Iordache, Anthony Liguori, Rolf Neugebauer, Phil Piwonka, and Diana-Maria Popa, Amazon Web Services. https://www.usenix.org/conference/nsdi20/presentation/agache. This paper is included in the Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20), February 25–27, 2020, Santa Clara, CA, USA. ISBN 978-1-939133-13-7.

Abstract: Serverless containers and functions are widely used for deploying and managing software in the cloud. Their popularity is due to reduced cost of operations, improved utilization of hardware, and faster scaling than traditional deployment methods. The economics and scale of serverless applications demand that workloads from multiple customers run on the same hardware with minimal overhead, while preserving strong se[curity] […] [ad]vantage over traditional server provisioning processes: multitenancy allows servers to be shared across a large number of workloads, and the ability to provision new functions and containers in milliseconds allows capacity to be switched between workloads quickly as demand changes. Serverless is also attracting the attention of the research community [21, 26, 27, 44, 47], including work on scaling out video encoding [13], linear algebra [20, 53] and parallel compilation [12].
Building for I.MX Mature Boards
NXP Semiconductors, Application Note AN12024, Rev. 0, 07/2017. Building for i.MX Mature Boards.

Contents: 1. Introduction; 2. Mature SoC Software Status; 3. Further Assistance; 4. Boards Supported by the Yocto Project Community; 4.1. Build environment setup; 4.2. Examples.

1. Introduction. The software for mature i.MX boards is upstreamed into the Linux kernel and U-Boot communities. These boards can use the current Linux kernel and U-Boot community solution. NXP no longer provides code and images for these boards directly. This document describes how to build the current Linux kernel and U-Boot software using the community solution for mature boards. The i.MX Board Support Package (BSP) releases are only supported for one year after the release date, so developing mature SoC projects using community solutions provides more current software and longer life for software than using the static BSPs provided by NXP. Community software includes upstreamed patches that the i.MX development team has provided to the community after the BSPs were released. Peripherals on i.MX mature boards such as the VPU and GPU, which depend on proprietary software, might be limited to an earlier version of the i.MX BSP. Supported i.MX reference boards should use the latest released i.MX BSPs on i.MX Software and Tools.

2. Mature SoC Software Status. The following i.MX mature SoC boards are from the i.MX 2X, 3X, and 5X families. […]
Architectural Implications of Function-As-A-Service Computing
Mohammad Shahrad, Jonathan Balkind, and David Wentzlaff, Princeton University, Princeton, USA.

ABSTRACT: Serverless computing is a rapidly growing cloud application model, popularized by Amazon's Lambda platform. Serverless cloud services provide fine-grained provisioning of resources, which scale automatically with user demand. Function-as-a-Service (FaaS) applications follow this serverless model, with the developer providing their application as a set of functions which are executed in response to a user- or system-generated event. Functions are designed to be short-lived and execute inside containers or virtual machines, introducing a range of system-level overheads. This paper studies the architectural implications of this emerging paradigm. Using the commercial-grade Apache OpenWhisk FaaS platform on real servers, this work investigates and identifies the architectural implications of FaaS serverless computing. The workloads, along with the way that FaaS inherently interleaves short functions from many tenants, frustrates many of the locality-preserving architectural structures common in modern processors.

[Figure 1: We characterize the server-level overheads of Function-as-a-Service applications, compared to native execution. This contrasts with prior work [2–5], which focused […]. The figure's annotations highlight, among others, a 35% decrease in IPC due to interference, 6x variation due to invocation pattern, 20x branch MPKI and >10x execution time for short functions, roughly 500 ms cold starts, and up to 20x slowdown versus native execution.]