Deker: Decomposing Commodity Kernels for Verification


TWC: Small: Deker: Decomposing Commodity Kernels for Verification

Overview. Despite the many ways in which the use of computer systems has evolved over the last decades, the software engineering technology behind the very core of the systems stack, the operating system kernel, remains unchanged since early computer systems. Having started as a relatively simple software layer that provides isolation and multiplexing of hardware, modern kernels combine dozens of complex subsystems and consist of tens of millions of lines of code. Today, OS kernels are part of many mission-critical systems. We trust these systems not only to run correctly in the face of thousands of development commits and massive re-engineering efforts, but also to withstand targeted security attacks. Unfortunately, modern kernels are still developed with legacy software engineering techniques: an unsafe programming language, low-level concurrency primitives, and virtually no testing or verification tools. As a result, these systems are faulty and vulnerable. Even worse, the complexity and size of modern kernels will likely keep them beyond the reach of testing, static analysis, and software verification tools for years to come.

Proposed research. The PIs propose creating Deker, a framework for decomposing and verifying commodity operating system kernels. Deker turns a de-facto standard commodity operating system kernel into a collection of strongly isolated subsystems suitable for verification. Despite multiple decades of evolution and improvement in software verification tools, almost none of them have made their way into regular industry practice. Deker aims to change this with a holistic approach that unifies modular redesign of legacy components with customized verification techniques. While decomposing the kernel and providing complete isolation of subsystems, Deker remains practical: it retains source-level compatibility with the non-decomposed kernel, enables incremental adoption, and remains fast. As the main glue connecting the decomposition and verification efforts, a rigorous interface definition language (IDL) is proposed for specifying the protocols that govern decomposed subsystems. Explicit protocol specifications simplify verification and maintenance, while the accompanying IDL compiler automatically generates stubs that are correct by construction, justifying the manual programmer effort that goes into writing IDL descriptions.

Intellectual merit. The first contribution of this work is a set of techniques, principles, and tools that enable practical decomposition of a fully featured operating system kernel. Deker develops decomposition patterns as a set of recipes for decomposing legacy components. Deker relies on a powerful IDL to generate the glue code that enables transparent function invocation and object synchronization across share-nothing subsystems. Deker's IDL defines disciplines for synchronizing object hierarchies and invoking isolated subsystems. The second main contribution is a custom verification framework that builds on top of Deker's decomposed environment. The framework integrates with the rest of Deker through the IDL descriptions of subsystem interfaces, which are leveraged to extract the environment models needed for modular verification and the properties ensuring correct behavior of subsystems. The verification framework embeds tailored algorithms for efficient handling of decomposed subsystems.
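To illustrate the role of the IDL concretely, the fragment below sketches what a generated caller-side stub for one cross-subsystem call could look like. It is a minimal sketch, not Deker's actual interface: the IDL syntax in the comment, the deker_msg structure, and the deker_channel_call and deker_ref_of primitives are hypothetical names introduced here for illustration only.

```c
/*
 * Hypothetical IDL fragment for a single cross-subsystem call, with a
 * projection naming the net_device fields allowed to cross the
 * isolation boundary:
 *
 *   rpc int register_netdev(projection net_device *dev);
 */
#include <stdint.h>

struct deker_msg {
	uint32_t opcode;        /* which remote entry point to invoke     */
	uint64_t obj_ref;       /* reference to the caller's net_device   */
	uint64_t fields[4];     /* projected (synchronized) object fields */
};

/* Assumed runtime primitives provided by the decomposition layer. */
int deker_channel_call(int channel, struct deker_msg *msg);
uint64_t deker_ref_of(const void *obj);

#define DEKER_OP_REGISTER_NETDEV 7
#define DEKER_NET_CHANNEL        1

struct net_device;              /* opaque in the caller's domain */

/*
 * Generated caller stub: marshals the projection, invokes the isolated
 * network subsystem over its channel, and returns the remote result as
 * if the call had been local.
 */
int register_netdev(struct net_device *dev)
{
	struct deker_msg msg = {
		.opcode  = DEKER_OP_REGISTER_NETDEV,
		.obj_ref = deker_ref_of(dev),
	};

	/* Only fields named in the IDL projection are copied across. */
	return deker_channel_call(DEKER_NET_CHANNEL, &msg);
}
```

A matching callee-side stub, generated from the same IDL description, would unmarshal the projection into the isolated subsystem's private copy of the object before dispatching to the real implementation; that pairing is what makes the generated glue correct by construction.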
Broader impacts. The PIs expect the proposed work to provide a foundation for mitigating the vast economic damage enabled by programming errors and security vulnerabilities in modern OS kernels. By decomposing and verifying an unmodified, commodity OS kernel, Deker builds a practical foundation for verifiable systems. Many kinds of software faults, and the security attacks, malware, and botnets that exploit them, will be largely eliminated. Deker will be implemented as part of the de-facto research and industry standard Linux operating system and will be open source, directly benefiting the broader community. Finally, diversity of research ideas, which is critical for advancing science, is only possible through diversity of their creators. This work will support students traditionally underrepresented in the security and verification communities; the PIs expect a female MS student to be a lead research contributor on Deker's language mechanisms.

Contents

1 Introduction
  1.1 Deker: Verification through Decomposition
2 Threat Model
3 Background and Related Work
  3.1 Vulnerabilities in OS Kernels
  3.2 Kernel Decomposition
  3.3 Kernel Verification
4 Preliminary Work
5 Detailed Research Plan
  5.1 Task 1: Getting Decomposed Subsystems Up and Running
  5.2 Task 2: Running SMACK on Representative Subsystems
  5.3 Task 3: Decomposition Patterns
  5.4 Task 4: Language Support for Decomposition and Verification
  5.5 Task 5: Support for Efficient Decomposed Environments
  5.6 Task 6: Tailored Verification Algorithms
6 Team
7 Timeline and Management Plan
8 Broader Impacts of the Proposed Work
9 Results from Prior NSF Support

1 Introduction

An operating system (OS) kernel is the single most critical part of the systems stack. The OS kernel ensures isolation, security, and access control for multiple mutually distrusting workloads and users. In a modern system, an attacker is one kernel vulnerability away from gaining control over the entire machine. A successful kernel attack provides the ability to make the threat persistent across reboots, conceal it from the user and anti-virus tools, establish a platform for compromising local applications, collect sensitive financial information and user credentials, mount attacks on other network hosts, and establish distributed, peer-to-peer command-and-control infrastructure.

Modern kernels are notoriously complex. Typical kernel code routinely relies on manual management of low-level concurrency primitives, handles millions of object allocations and deallocations per second, implements numerous security and access control checks, and adheres, in nearly every kernel function, to multiple conventions governing allocation, locking, and synchronization of kernel data structures.
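To make the point about conventions concrete, the following is a minimal sketch of the kind of discipline a short, ordinary kernel function must obey. It uses standard Linux primitives (spinlocks, atomic allocation, kernel linked lists), but the demo_event object, the demo_enqueue function, and the assumption that it may run in interrupt context are purely illustrative and not taken from any particular subsystem.

```c
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo_event {                    /* hypothetical kernel object */
	struct list_head link;
	int payload;
};

static LIST_HEAD(demo_queue);
static DEFINE_SPINLOCK(demo_lock);

/*
 * Assumed to be callable from interrupt context, so the function must
 * use a non-sleeping allocation, disable interrupts while holding the
 * lock, release the lock on every path, and report allocation failure
 * with a conventional error code.
 */
int demo_enqueue(int payload)
{
	struct demo_event *ev;
	unsigned long flags;

	ev = kmalloc(sizeof(*ev), GFP_ATOMIC);  /* atomic context: no sleeping */
	if (!ev)
		return -ENOMEM;

	ev->payload = payload;

	spin_lock_irqsave(&demo_lock, flags);   /* convention: lock guards the list */
	list_add_tail(&ev->link, &demo_queue);
	spin_unlock_irqrestore(&demo_lock, flags);

	return 0;
}
```

Which allocation flags are legal in which context, which lock protects which list, and what must happen on each error path are exactly the kinds of implicit, per-subsystem protocols that today are enforced only by convention.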
Despite the rapid evolution of computer systems over the last four decades, modern OS kernels rely on software development technology that has not changed since early computer systems. Due to the rapid rate of development (the de-facto industry standard Linux kernel receives over 50 thousand commits a year) and a huge codebase (as of 2014, the latest version of the Linux kernel contains over 12 million lines of C/C++ and assembly code), bugs and vulnerabilities are routinely introduced into the kernel. In 2014 alone, the Common Vulnerabilities and Exposures database listed 129 Linux kernel vulnerabilities that allow privilege escalation, denial of service, and other exploits, a number that has been consistent across several years [21, 84].

Being ubiquitous, modern OS kernels are primary targets for security attacks. They provide an industry-standard execution environment for nearly every consumer and enterprise device: home entertainment systems, routers, embedded devices, mobile, laptop, tablet, and desktop computers, enterprise workspaces, and data center infrastructure. Today, OS kernels are a de-facto part of many mission-critical systems, ranging from embedded medical devices [99, 146] to industrial control systems [96, 117]. Attackers routinely employ sophisticated vulnerability discovery tools such as black-box fuzzers [5, 54, 110] and vulnerability scanners [105, 108, 126]. Without support from verification tools, industry-standard OS kernels leave nearly every computer system on the planet vulnerable.

Despite numerous advances in software verification, static analysis, and testing tools, the software verification community has largely failed to address the needs of the average industry-grade OS kernel developer. Apart from static analysis tools that perform only shallow code analysis for simple classes of bugs (e.g., Coverity [25]), it is telling that, by and large, none of the more precise and powerful verification tools have made inroads into OS industry practice; to the best of our knowledge, none are regularly used in the Linux kernel development process. While the scalability of software verifiers has improved by orders of magnitude over the last decade (e.g., SAGE [54]), legacy monolithic kernels remain beyond their reach due to their sheer size, number of components, elaborate interactions, and hardware dependencies. There have been approaches that target particular subsystems in isolation (e.g., device drivers [6, 10, 67, 85, 112, 147]), but those require manually writing extensive environment specifications in a formalism typically understood only by verification experts. Such specifications are completely disjoint from the actual source code they model and are hard to maintain, which means they quickly fall out of sync as the code evolves and become largely obsolete. Multiple projects attempt to re-implement kernel functionality from scratch in a safer, verification-friendly language [22, 57, 64, 86, 149]. Although promising, these approaches are still far from applicable in a realistic deployment. Modern kernels accumulate several decades of development effort that results in irreplaceable functionality: hundreds of device drivers, dozens of network protocols, block storage stacks,