Ubuntu Overlayfs Local Privesc Vulnerability
Recommended publications
Rootless Containers with Podman and Fuse-Overlayfs
CernVM Workshop 2019 (4th June 2019). Rootless containers with Podman and fuse-overlayfs. Giuseppe Scrivano, @gscrivano.

Rootless containers: "Rootless containers refers to the ability for an unprivileged user (i.e. non-root user) to create, run and otherwise manage containers." (https://rootlesscontaine.rs/). This is not just about running the container payload as an unprivileged user: the container runtime itself also runs as an unprivileged user.

Don't confuse this with sudo podman run --user foo, which executes the process in the container as non-root while Podman and the OCI runtime still run as root. The USER instruction in a Dockerfile does the same, and notably you can't RUN dnf install that way. Likewise, podman run --uidmap executes containers as a non-root user using user namespaces; it is the most similar to rootless containers, but still requires podman and runc to run as root.

Motivation for rootless containers: to mitigate potential vulnerabilities in container runtimes, to allow users of shared machines (e.g. HPC) to run containers without the risk of breaking other users' environments, and to isolate nested containers.

Caveat: rootless containers are not a panacea. Although they can mitigate runtime vulnerabilities, they are powerless against kernel (and hardware) vulnerabilities such as CVE-2013-1858, CVE-2015-1328, and CVE-2018-18955. Castle approach: they should be used in conjunction with other security layers such as seccomp and SELinux.

Rootless Podman: Podman is a daemon-less alternative to Docker ($ alias ...)
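A minimal sketch of what "rootless" means in practice, assuming a typical setup with subordinate UID/GID ranges already configured; the image name is illustrative:

  # Rootless operation needs subordinate UID/GID ranges for the user
  # (normally configured in /etc/subuid and /etc/subgid).
  grep "$USER" /etc/subuid /etc/subgid

  # Run a container entirely as an unprivileged user: no sudo, no root
  # daemon, no root OCI runtime.
  podman run --rm docker.io/library/alpine id

  # Inside the user namespace the container process appears as root,
  # but it is mapped back to the invoking user's UID on the host.
  podman unshare cat /proc/self/uid_map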
In Search of the Ideal Storage Configuration for Docker Containers
Vasily Tarasov, Lukas Rupprecht, Dimitris Skourtis, Amit Warke, Dean Hildebrand, Mohamed Mohamed, and Nagapramod Mandagere (IBM Research—Almaden); Wenji Li and Ming Zhao (Arizona State University); Raju Rangaswami (Florida International University).

Abstract—Containers are a widely successful technology today, popularized by Docker. Containers improve system utilization by increasing workload density. Docker containers enable seamless deployment of workloads across development, test, and production environments. Docker's unique approach to data management, which involves frequent snapshot creation and removal, presents a new set of exciting challenges for storage systems. At the same time, storage management for Docker containers has remained largely unexplored, with a dizzying array of solution choices and configuration options. In this paper we unravel the multi-faceted nature of Docker storage and demonstrate its impact on system and workload performance. As we uncover new properties of the popular Docker storage drivers, this is a sobering reminder that widespread use of new technologies can often precede their careful evaluation.

... every running container. This would cause a great burden on the I/O subsystem and make container start time unacceptably high for many workloads. As a result, copy-on-write (CoW) storage and storage snapshots are popularly used and images are structured in layers. A layer consists of a set of files, and layers with the same content can be shared across images, reducing the amount of storage required to run containers. With Docker, one can choose Aufs [6], Overlay2 [7], Btrfs [8], or device-mapper (dm) [9] as storage drivers, which provide the required snapshotting and CoW capabilities for images. None of these solutions, however, were designed with Docker in mind and their effectiveness for Docker has not been ...
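As a hedged illustration of the driver choice the paper studies, assuming a systemd-managed Docker installation (paths and the example driver are typical defaults, not prescriptions):

  # Which snapshotting/CoW driver is the daemon currently using?
  docker info --format '{{.Driver}}'

  # Select a driver explicitly in /etc/docker/daemon.json, e.g.:
  #   { "storage-driver": "overlay2" }
  # then restart the daemon for the change to take effect.
  sudo systemctl restart docker

  # Image layers shared across containers live under the driver's
  # directory, e.g. /var/lib/docker/overlay2 for the overlay2 driver.
  sudo du -sh /var/lib/docker/overlay2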
MINCS - The Container in the Shell (Script)
Masami Hiramatsu <[email protected]>, Tech Lead, Linaro Ltd. Open Source Summit Japan 2017.

Who am I? Masami Hiramatsu: Linux kernel kprobes maintainer, working for Linaro as a Tech Lead.

Demo:
  # minc top
  # minc -r /opt/debian/x86_64
  # minc -r /opt/debian/arm64 --arch arm64

What is MINCS? My personal fun project to learn how Linux containers work :-) MINCS stands for Mini Container Shell Scripts (pronounced 'minks'): a container engine implementation using POSIX shell scripts. It is small (~60KB, ~2KLOC; ~20KB at minimum), it can run on busybox, it has no architecture dependency (except for qemu/um mode), and it needs no special binaries (except for libcap, just for capsh --exec). Main features: namespaces (mount, PID, user, UTS, net*), cgroups (CPU, memory), capabilities, overlay filesystem, qemu cross-arch/system emulation, user-mode-linux, and image importing from dockerhub. And all are done by CLI commands :-)

Why shell script? That is my favorite language :-) It is easy to understand for *nix administrators (just a bunch of commands), easy to modify (good for prototyping), and easy to deploy (no architecture dependencies, very small, able to run on busybox; + libcap is perfect).

MINCS use-cases: for learning, to understand how containers work; for development, to prepare an isolated (cross-)build environment; for testing, to test new applications in an isolated environment, or to test new kernel features on qemu using local tools; and maybe for products, on embedded devices with small resources.

What is a Linux container? There are many Linux container engines: Docker, LXC, rkt, runc, ...
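The namespace-plus-overlayfs core that MINCS scripts around can itself be sketched in a few lines of plain shell. This is an illustrative sketch, not MINCS; it assumes a pre-populated rootfs path and must run as root:

  #!/bin/sh
  # Minimal container in shell: overlayfs for a copy-on-write root,
  # unshare(1) for namespaces, chroot to enter the new root.
  ROOT=/opt/debian/x86_64          # pre-populated rootfs (illustrative)
  WORK=$(mktemp -d)
  mkdir -p "$WORK/upper" "$WORK/work" "$WORK/merged"

  # The base rootfs stays read-only; all writes land in the upper dir.
  mount -t overlay overlay \
    -o lowerdir="$ROOT",upperdir="$WORK/upper",workdir="$WORK/work" \
    "$WORK/merged"

  # New mount, PID, and UTS namespaces; --fork is needed for a PID ns.
  unshare --mount --pid --uts --fork chroot "$WORK/merged" /bin/sh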
High Velocity Kernel File Systems with Bento
Samantha Miller, Kaiyuan Zhang, Mengqi Chen, and Ryan Jennings, University of Washington; Ang Chen, Rice University; Danyang Zhuo, Duke University; Thomas Anderson, University of Washington. https://www.usenix.org/conference/fast21/presentation/miller. This paper is included in the Proceedings of the 19th USENIX Conference on File and Storage Technologies, February 23-25, 2021. ISBN 978-1-939133-20-5. Open access to the Proceedings is sponsored by USENIX.

Abstract: High development velocity is critical for modern systems. This is especially true for Linux file systems, which are seeing increased pressure from new storage devices and new demands on storage systems. However, high velocity Linux kernel development is challenging due to the ease of introducing bugs, the difficulty of testing and debugging, and the lack of support for redeployment without service disruption. Existing approaches to high-velocity development of file systems for ...

... kernel-level debuggers and kernel testing frameworks makes this worse. The restricted and different kernel programming environment also limits the number of trained developers. Finally, upgrading a kernel module requires either rebooting the machine or restarting the relevant module, either way rendering the machine unavailable during the upgrade. In the cloud setting, this forces kernel upgrades to be batched to meet cloud-level availability goals. Slow development cycles are a particular problem for file systems.
SingularityCE User Guide Release 3.8
SingularityCE Project Contributors. Aug 16, 2021.

Contents:
1 Getting Started & Background Information
  1.1 Introduction to SingularityCE
  1.2 Quick Start
  1.3 Security in SingularityCE
2 Building Containers
  2.1 Build a Container
  2.2 Definition Files
  2.3 Build Environment
  2.4 Support for Docker and OCI
  2.5 Fakeroot feature
3 Signing & Encryption
  3.1 Signing and Verifying Containers
  3.2 Key commands
  3.3 Encrypted Containers
4 Sharing & Online Services
  4.1 Remote Endpoints
  4.2 Cloud Library
5 Advanced Usage
  5.1 Bind Paths and Mounts
  5.2 Persistent Overlays
  5.3 Running Services
  5.4 Environment and Metadata
  5.5 OCI Runtime Support
  5.6 Plugins
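As a hedged illustration of the "Persistent Overlays" and "Bind Paths and Mounts" chapters above, using commands as documented for SingularityCE 3.8 (image and path names are examples):

  # Create a writable ext3 overlay image and attach it to a read-only
  # SIF container; changes persist across runs inside the overlay file.
  singularity overlay create --size 1024 my_overlay.img
  singularity shell --overlay my_overlay.img ubuntu.sif

  # Bind a host directory into the container at runtime.
  singularity exec --bind /data:/mnt ubuntu.sif ls /mnt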
Container-Based Virtualization for Byte-Addressable NVM Data Storage
2016 IEEE International Conference on Big Data (Big Data). Ellis R. Giles, Rice University, Houston, Texas. [email protected].

Abstract—Container based virtualization is rapidly growing in popularity for cloud deployments and applications as a virtualization alternative due to the ease of deployment coupled with high performance. Emerging byte-addressable, non-volatile memories, commonly called Storage Class Memory or SCM, technologies are promising both byte-addressability and persistence near DRAM speeds, operating on the main memory bus. These new memory alternatives open up a new realm of applications that no longer have to rely on slow, block-based persistence, but can rather operate directly on persistent data using ordinary loads and stores through the cache hierarchy coupled with transaction techniques. However, SCM presents a new challenge for container-based applications, which typically access persistent data through layers of block based file isolation.

... Storage Class Memory, or SCM, is an exciting new memory technology with the potential of replacing hard drives and SSDs, as it offers high-speed, byte-addressable persistence on the main memory bus. Several technologies are currently under research and development, each with different performance, durability, and capacity characteristics. These include a ReRAM by Micron and Sony; a slower, but very large capacity Phase Change Memory, or PCM, by Micron and others; and a fast, smaller spin-torque ST-MRAM by Everspin. High-speed, byte-addressable persistence will give rise to new applications that no longer have to rely on slow, block based storage devices and to serialize data for ...
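A sketch of one common way to expose byte-addressable persistence to a container today, via a DAX-mounted filesystem on a pmem device; device names, mount points, and the container image are assumptions:

  # Carve a filesystem-DAX namespace out of persistent memory
  # (creates a device such as /dev/pmem0).
  ndctl create-namespace --mode=fsdax

  # DAX mounts let mmap'ed loads and stores bypass the page cache.
  mkfs.ext4 /dev/pmem0
  mount -o dax /dev/pmem0 /mnt/pmem

  # Bind the DAX mount into a container; the application can then
  # mmap files and persist data with ordinary stores plus flushes.
  docker run --rm -v /mnt/pmem:/pmem alpine ls /pmem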
A Novel Scheduling Framework Leveraging Hardware Cache Partitioning for Cache-Side-Channel Elimination in Clouds
Read Sprabery (University of Illinois, Urbana-Champaign, [email protected]), Konstantin Evchenko (University of Illinois, Urbana-Champaign, [email protected]), Abhilash Raj (Oregon State University, [email protected]), Rakesh B. Bobba (Oregon State University, [email protected]), Sibin Mohan (University of Illinois, Urbana-Champaign, [email protected]), Roy H. Campbell (University of Illinois, Urbana-Champaign, [email protected]).

ABSTRACT: While there exist many isolation mechanisms that are available to cloud service providers, including virtual machines, containers, etc., the problem of side-channels increases in importance as a remaining security vulnerability – particularly in the presence of shared caches and multicore processors. In this paper we present a hardware-software mechanism that improves the isolation of cloud processes in the presence of shared caches on multicore chips. Combining the Intel CAT architecture, which enables cache partitioning on the fly, with novel scheduling techniques and state cleansing mechanisms, we enable cache-side-channel free computing for Linux-based containers and virtual machines, in particular ...

... on the Last-Level-Cache (LLC) that is shared across multiple cores [18, 23, 43, 44, 48] – these make defenses much harder. Many defenses against cache-side-channel attacks in cloud environments have also been proposed (e.g., [6, 19, 21, 22, 24, 25, 28, 31, 32, 37-40, 45, 51]). However, the proposed solutions suffer from a variety of drawbacks: (a) some are probabilistic [25, 37, 51]; (b) others do not protect applications when SMT is enabled [51]; (c) some require developers to re-write applications [19, 22, 31]; (d) others require hardware changes [39, 40], impacting deployability; (e) some depend on violating x86 semantics by modifying the resolution, accuracy, or availability of timing instructions [21, 24, 38] and consequently require significant changes to the applications.
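Intel CAT is exposed on Linux through the resctrl filesystem; as a hedged sketch of carving out an LLC partition for one process (the capacity mask and PID are illustrative, and the exact schemata format depends on the CPU):

  # Mount the resource-control interface and create a partition group.
  mount -t resctrl resctrl /sys/fs/resctrl
  mkdir /sys/fs/resctrl/isolated

  # Reserve four ways of the L3 cache with cache ID 0 for this group.
  echo "L3:0=0x0f" > /sys/fs/resctrl/isolated/schemata

  # Move a (hypothetical) container process into the partition; its
  # cache lines can no longer be evicted by co-resident tenants.
  echo 12345 > /sys/fs/resctrl/isolated/tasks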
An Incremental Path Towards a Safer OS Kernel
Jialin Li (University of Washington), Samantha Miller (University of Washington), Danyang Zhuo (Duke University), Ang Chen (Rice University), Jon Howell (VMware Research), Thomas Anderson (University of Washington).

Abstract: Linux has become the de-facto operating system of our age, but its vulnerabilities are a constant threat to service availability, user privacy, and data integrity. While one might scrap Linux and start over, the cost of that would be prohibitive due to Linux's ubiquitous deployment. In this paper, we propose an alternative, incremental route to a safer Linux through proper modularization and gradual replacement module by module. We lay out the research challenges and potential solutions for this route, and discuss the open questions ahead.

[Figure 1: Our vision and the current state of systems. The figure plots lines of code against safety guarantees (no guarantees, type safety, ownership safety, functional verification): Linux, Safe Linux, and FreeBSD at tens of millions of lines; Singularity, Biscuit, and Theseus at hundreds of thousands; RedLeaf, seL4, and Hyperkernel at thousands; with an arrow marking incremental progress from Linux toward Safe Linux.]

... security vulnerabilities are reported each year, and lifetime analysis suggests that the new code added this year has introduced tens of thousands of more bugs. While operating systems are far from the only source of security vulnerabilities, it is hard to envision building trustworthy computer systems without addressing operating system correctness. One attractive option is to scrap Linux and start over. Many past works focus on building more secure and correct operating systems from the ground up: ones based on strong ...

CCS Concepts: Software and its engineering: Software verification; Computer systems organization: Reliability. Keywords: kernel safety, verified systems, reliable systems. ACM Reference Format: Jialin Li, Samantha Miller, Danyang Zhuo, Ang Chen, Jon Howell, and Thomas Anderson.
Koller Dan Williams IBM T
An Ounce of Prevention is Worth a Pound of Cure: Ahead-of-time Preparation for Safe High-level Container Interfaces Ricardo Koller Dan Williams IBM T. J. Watson Research Center Abstract as the container filesystem is already layered at the same file- based granularity as images. Containers continue to gain traction in the cloud as However, relying on the host to implement the filesystem lightweight alternatives to virtual machines (VMs). This abstraction is not without cost: the filesystem abstraction is is partially due to their use of host filesystem abstractions, high-level, wide, and complex, providing an ample attack which play a role in startup times, memory utilization, crash surface to the host. Bugs in the filesystem implementation consistency, file sharing, host introspection, and image man- can be exploitedby a user to crash or otherwise get controlof agement. However, the filesystem interface is high-level and the host, resulting in (at least) a complete isolation failure in wide, presenting a large attack surface to the host. Emerg- a multi-tenant environment such as a container-based cloud. ing secure container efforts focus on lowering the level A search on the CVE database [13] for kernel filesystem vul- of abstraction of the interface to the host through deprivi- nerabilities provides a sobering reminder of the width, com- leged functionality recreation (e.g., VMs, userspace kernels). plexity, and ultimately danger of filesystem interfaces. However, the filesystem abstraction is so important that some have resorted to directly exposing it from the host instead of Efforts to improve the security of containers use low-level suffering the resulting semantic gap. -
ODROID-HC2: 3.5” High Powered Storage February 1, 2018
ODROID WiFi Access Point: Share Files Via Samba (February 1, 2018). How to set up an ODROID as a WiFi access point so that the ODROID's hard drive can be accessed and modified from another computer. This is primarily aimed at allowing access to images, videos, and log files on the ODROID.

ODROID-HC2: 3.5" High powered storage (February 1, 2018). The ODROID-HC2 is an affordable mini PC and a perfect solution for a network attached storage (NAS) server. This device's home cloud-server capabilities centralize data and enable users to share and stream multimedia files to phones, tablets, and other devices across a network. It is an ideal tool for many use ...

Using SquashFS As A Read-Only Root File System (February 1, 2018). This guide describes the usage of SquashFS (see the sketch after this list).

PiFace: Control and Display 2 (February 1, 2018). For those who have the PiFace Control and Display 2 and want to make it compatible with the ODROID-C2.

Android Gaming: Data Wing, Space Frontier, and Retro Shooting – Pixel Space Shooter (February 1, 2018). Variations on a theme! Race, blast into space, and blast things into pieces that are racing towards us. The fun doesn't need to stop when you take a break from your projects. Our monthly picks for Android games.

Linux Gaming: Saturn Games – Part 1 (February 1, 2018). I think it's time we go into a bit more detail about the Sega Saturn for the ODROID-XU3/XU4.

Gaming Console: Running Your Favorite Games On An ODROID-C2 Using Android (February 1, 2018). I built a gaming console using an ODROID-C2 running Android 6.

Controller Area Network (CAN) Bus: Implementation ...
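For the SquashFS article's topic, a hedged sketch of building and mounting a read-only root, with an optional writable overlay stacked on top (all paths are examples):

  # Pack a prepared directory tree into a compressed read-only image.
  mksquashfs rootfs/ rootfs.squashfs -comp xz

  # Loop-mount it; the root can no longer be modified in place.
  mount -t squashfs -o loop,ro rootfs.squashfs /mnt/ro-root

  # If writes are needed, stack a tmpfs-backed overlay on top.
  mkdir -p /rw /mnt/root
  mount -t tmpfs tmpfs /rw
  mkdir -p /rw/upper /rw/work
  mount -t overlay overlay \
    -o lowerdir=/mnt/ro-root,upperdir=/rw/upper,workdir=/rw/work \
    /mnt/root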
Implementing a Parallel World Model Using Linux Containers for Efficient System Administration
(Paper) Yasushi Shinjo and Wataru Ishida, Department of Computer Science, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki 305-8573, Japan ([email protected]); Jinpeng Wei, School of Computing and Information Sciences, Florida International University, Miami, Florida, USA.

Abstract—This paper describes the implementation of a parallel world model using Linux containers. A parallel world (or parallel universe) model allows a user to create multiple execution environments, called worlds, in a single operating system and to manipulate these worlds. This model enables a system administrator to create a new test world that looks like a production world. The system administrator upgrades fundamental software and tests applications in the new test world while running the production world. If the applications do not pass the tests, the administrator deletes the new world. If the applications pass the tests, the administrator merges the test world into the production world. Prior to the merge, the ...

... software updates. However, this method does not help to run the production environment while testing. To address these problems, we propose providing a parallel world model at the operating system level. A parallel world or parallel universe model is a model to describe time machines in science fiction (SF) and to interpret observations in quantum physics. In our parallel world model at the operating system level, a world is a container of files and an execution environment of processes. Using this parallel world model, a system administrator can upgrade fundamental software, as follows. ...
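The paper has its own container-based implementation; purely as a hedged sketch of the idea, a "test world" can be approximated with overlayfs, where the production tree is the read-only lower layer and the world's changes accumulate in a private upper layer (paths and the upgrade command are illustrative):

  # Create a test world whose view of /production is copy-on-write.
  mkdir -p /worlds/test/upper /worlds/test/work /worlds/test/merged
  mount -t overlay overlay \
    -o lowerdir=/production,upperdir=/worlds/test/upper,workdir=/worlds/test/work \
    /worlds/test/merged

  # Upgrade and test inside the world; production stays untouched.
  chroot /worlds/test/merged /bin/sh -c 'apt-get update && apt-get -s upgrade'

  # Failed tests: delete the world. Passed tests: "merge" by copying
  # the upper layer back (simplistic: ignores overlayfs whiteouts).
  rsync -a /worlds/test/upper/ /production/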
The Virtual Filesystem Interface in 4.4BSD
Marshall Kirk McKusick, Consultant and Author, Berkeley, California.

ABSTRACT: This paper describes the virtual filesystem interface found in 4.4BSD. This interface is designed around an object-oriented virtual file node or "vnode" data structure. The vnode structure is described along with its method for dynamically expanding its set of operations. These operations have been divided into two groups: those to manage the hierarchical filesystem name space and those to manage the flat filestore. The translation of pathnames is described, as it requires a tight coupling between the virtual filesystem layer and the underlying filesystems through which the path traverses. This paper describes the filesystem services that are exported from the vnode interface to its clients, both local and remote. It also describes the set of services provided by the vnode layer to its client filesystems. The vnode interface has been generalized to allow multiple filesystems to be stacked together. After describing the stacking functionality, several examples of stacking filesystems are shown.

[To appear in The Design and Implementation of the 4.4BSD Operating System, by Marshall Kirk McKusick et al., (c) 1995 by Addison-Wesley Publishing Company, Inc. Reprinted with the permission of the publisher. (c) 1995 The USENIX Association, Computing Systems, Vol. 8, No. 1, Winter 1995.]

1. The Virtual Filesystem Interface

In early UNIX systems, the file entries directly referenced the local filesystem inode; see Figure 1 [Leffler et al. 1989]. This approach worked fine when there was a single filesystem implementation. However, with the advent of multiple filesystem types, the architecture had to be generalized.
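The stacking functionality the paper describes survives in the BSD descendants of this design; as a hedged sketch, the null (pass-through) layer on FreeBSD stacks one mount on another, forwarding every vnode operation to the underlying filesystem (paths are examples):

  # Stack a null layer over an existing directory; every vnode
  # operation on the new mount is forwarded to the layer below.
  mount -t nullfs /usr/src /compat/src

  # A read-only variant restricts the stacked view without touching
  # the underlying mount.
  mount -t nullfs -o ro /usr/src /compat/src-ro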