How Containers Work
Sasha Goldshtein (@goldshtn), CTO, Sela Group, github.com/goldshtn


Agenda
• Why containers?
• Container building blocks
• Security profiles
• Namespaces
• Control groups
• Building tiny containers

Hardware Virtualization vs. OS Virtualization
[Diagram: on the VM side, each application (ASP.NET, Express, RavenDB, Nginx, running on .NET Core, V8, or libc) sits on its own guest OS (Ubuntu, RHEL) with its own Linux kernel above a hypervisor, so the OS is duplicated per VM; on the container side, the same applications share image layers and a single host kernel, so only the image layers differ.]

Key Container Usage Scenarios
• Simplifying configuration: infrastructure as code
• Consistent environment from development to production
• Application isolation without an extra copy of the OS
• Server consolidation, thinner deployments
• Rapid deployment and scaling

Linux Container Building Blocks
[Diagram: images such as node:6, ms/aspnetcore:2, and openjdk:8 provide copy-on-write layered filesystems (/app/server.js, /usr/bin/node, /lib64/libc.so, and so on); namespaces isolate each container, control groups cap its CPU and memory (e.g. 1 CPU / 1 GB, 2 CPU / 4 GB), and a security profile mediates its system calls into the shared kernel.]

Security Profiles
• Docker supports additional security modules that restrict what containerized processes can do
• AppArmor: text-based configuration that restricts access to certain files, network operations, etc.
• Docker uses seccomp to restrict system calls performed by the containerized process
• The default seccomp profile blocks syscalls like reboot, stime, umount, specific types of socket protocols, and others
• See also: Docker seccomp profile, AppArmor for Docker

Experiment: Using a Custom Profile (1/4)
host% docker run -d --rm microsoft/dotnet:runtime …app
host% docker exec -it $CID bash
cont# apt update && apt install linux-perf
cont# /usr/bin/perf_4.9 record -F 97 -a
...
perf_event_open(..., 0) failed unexpectedly ...
Error: You may not have permission to collect system-wide stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid, ...

Experiment: Using a Custom Profile (2/4)
host% echo 0 | sudo tee /proc/sys/kernel/perf_event_paranoid
0
cont# /usr/bin/perf_4.9 record -F 97 -ag
...
perf_event_open(..., 0) failed unexpectedly ...
Error: You may not have permission to collect system-wide stats.
Consider tweaking /proc/sys/kernel/perf_event_paranoid, ...

Experiment: Using a Custom Profile (3/4)
host% diff --color=always --context=2 default.json perf.json
*** default.json 2018-04-18 06:06:19.612371527 +0000
--- perf.json    2018-04-18 06:06:38.423793797 +0000
***************
*** 217,220 ****
--- 217,221 ----
      "openat",
      "pause",
+     "perf_event_open",
      "pipe",
      "pipe2",

Experiment: Using a Custom Profile (4/4)
host% docker run -d --security-opt seccomp=./perf.json microsoft/dotnet:runtime …app
host% docker exec -it app sh -c 'apt update && \
      apt install -y linux-perf && \
      /usr/bin/perf_4.9 record \
      -o /app/perf.data -F 97 -a -- sleep 5'
...
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.064 MB /app/perf.data (970 samples) ]

Capabilities
• Docker also disables capabilities such as CAP_SYS_ADMIN, CAP_SYS_PTRACE, CAP_SYS_BOOT, and CAP_SYS_TIME, which you can add if necessary
• docker run --cap-add ...
• Docker’s default seccomp policy is configured to also enable certain syscalls when the relevant capability is enabled
• E.g. ptrace(2) when CAP_SYS_PTRACE is enabled
• E.g. reboot(2) when CAP_SYS_BOOT is enabled
• See also: capabilities(7)
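The custom-profile experiment above shows the diff between default.json and perf.json, but not how such a profile might be produced. A minimal sketch of one way to do it, assuming jq is installed and that Docker's default profile (published in the moby/moby repository) keeps its syscall whitelist in the first entry of the syscalls array; the exact contents and line numbers change between Docker releases, so inspect the file before editing it:

host% curl -fsSL -o default.json \
      https://raw.githubusercontent.com/moby/moby/master/profiles/seccomp/default.json
host% jq '.syscalls[0].names += ["perf_event_open"]' default.json > perf.json
host% docker run -d --rm --security-opt seccomp=./perf.json microsoft/dotnet:runtime …app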
Namespaces
• Namespaces isolate processes from each other; restrict visibility
• PID: container gets its own PIDs
• mnt: container gets its own mount points (view of the file system)
• net: container gets its own network interfaces
• user: container gets its own user and group ids
• See also: unshare(2), setns(2), namespaces(7)

Docker Process Architecture
[Diagram: docker run in a shell talks to dockerd, which hands the container off to docker-containerd; each container (e.g. ubuntu:xenial) gets its own docker-containerd-shim, which invokes docker-runc to set up and start the container's processes.]

Experiment: Namespace System Calls
host[1]% CONTAINERD=$(pgrep -o -f docker-containerd)
host[1]% strace -f -eunshare -qq -p $CONTAINERD
[pid 16274] unshare(CLONE_NEWNS|CLONE_NEWUTS|CLONE_NEWIPC|CLONE_NEWNET|CLONE_NEWPID) = 0
...
host[2]% docker run -it --rm ubuntu:xenial bash

Experiment: Namespace Isolation (1/2)
host% docker run -d --rm microsoft/dotnet:runtime …app
host% docker run -d --rm microsoft/dotnet:runtime …app
cont1# ls /proc | head -2
1
14
cont2# ls /proc | head -2
1
25
host% ps -e | grep simpleapp | grep -v grep
  316 ?        00:00:00 simpleapp
32709 ?        00:00:00 simpleapp

Experiment: Namespace Isolation (2/2)
cont1# touch /tmp/file1
cont2# ls /tmp
clr-debug-pipe-1-502146227-in
clr-debug-pipe-1-502146227-out

Experiment: Listing Namespaces (1/2)
host% ls -l /proc/$PID/ns
total 0
lrwxrwxrwx. 1 root root 0 Apr 17 10:07 cgroup -> cgroup:[4026531835]
lrwxrwxrwx. 1 root root 0 Apr 17 10:03 ipc -> ipc:[4026532323]
lrwxrwxrwx. 1 root root 0 Apr 17 10:03 mnt -> mnt:[4026532321]
lrwxrwxrwx. 1 root root 0 Apr 17 10:02 net -> net:[4026532326]
lrwxrwxrwx. 1 root root 0 Apr 17 10:03 pid -> pid:[4026532324]
lrwxrwxrwx. 1 root root 0 Apr 17 10:07 pid_for_children -> ...
lrwxrwxrwx. 1 root root 0 Apr 17 10:07 user -> user:[4026531837]
lrwxrwxrwx. 1 root root 0 Apr 17 10:03 uts -> uts:[4026532322]

Experiment: Listing Namespaces (2/2)
host% lsns -t pid
        NS TYPE NPROCS   PID USER COMMAND
4026531836 pid      92     1 root /usr/lib/systemd/systemd ...
4026532260 pid       1 32709 root /app/simpleapp
4026532324 pid       1   316 root /app/simpleapp
host% lsns -p 316
        NS TYPE NPROCS PID USER COMMAND
...
4026532321 mnt       1 316 root /app/simpleapp
4026532322 uts       1 316 root /app/simpleapp
4026532323 ipc       1 316 root /app/simpleapp
4026532324 pid       1 316 root /app/simpleapp
4026532326 net       1 316 root /app/simpleapp

Experiment: Entering Namespaces (1/2)
host% nsenter -t 316 -m ls -la /tmp
total 16
...
host% strace -f -esetns nsenter -t 316 -a true
setns(3, CLONE_NEWCGROUP) = 0
setns(4, CLONE_NEWIPC)    = 0
setns(5, CLONE_NEWUTS)    = 0
setns(6, CLONE_NEWNET)    = 0
setns(7, CLONE_NEWPID)    = 0
setns(8, CLONE_NEWNS)     = 0
...

Experiment: Entering Namespaces (2/2)
host[2]% docker exec $CID true
host[1]% strace -f -esetns -qq -p $CONTAINERD
[pid 712] setns(5, CLONE_NEWIPC)  = 0
[pid 712] setns(8, CLONE_NEWUTS)  = 0
[pid 712] setns(9, CLONE_NEWNET)  = 0
[pid 712] setns(10, CLONE_NEWPID) = 0
[pid 712] setns(11, CLONE_NEWNS)  = 0
...
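Docker is not required to experiment with namespaces: the unshare(1) utility from util-linux wraps the same unshare(2) call that containerd issues above. A minimal sketch, run as root, with illustrative output (PIDs and TTY names will differ):

host% sudo unshare --fork --pid --mount-proc ps -e
  PID TTY          TIME CMD
    1 pts/0    00:00:00 ps

Inside the new PID namespace, ps sees itself as PID 1, just as /app/simpleapp did in the containers above.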
Experiment: “Attaching” A Container (1/4)
host% docker run -it -v $PWD:/src --rm microsoft/dotnet:2.1-sdk sh -c 'cd /src && dotnet new console'
host% vim Program.cs
host% docker run --rm -v $PWD:/src microsoft/dotnet:2.1-sdk sh -c 'cd /src && dotnet restore && dotnet publish -c Release -o ./out -r linux-x64'
host% docker run --name app --rm -d -v $PWD/out:/app microsoft/dotnet:2.1-runtime-deps /app/simpleapp

Experiment: “Attaching” A Container (2/4)
host% sudo docker exec app ps
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"ps\": executable file not found in $PATH": unknown
host% sudo docker exec app ls /proc
1
15
...

Experiment: “Attaching” A Container (3/4)
host% sudo docker exec -t app strace -ewrite -p 15
OCI runtime exec failed: exec failed: container_linux.go:296: starting container process caused "exec: \"strace\": executable file not found in $PATH": unknown
host% sudo docker exec -it app sh
cont# apt update && apt install strace
...
cont# strace -ewrite -p 15
strace: attach: ptrace(PTRACE_ATTACH, 15): Operation not permitted

Experiment: “Attaching” A Container (4/4)
host% docker run --rm -it --pid=container:app --cap-add=SYS_PTRACE alpine sh
cont# apk add strace --no-cache
...
cont# strace -ewrite -p 1
strace: Process 1 attached with 7 threads
[pid 1] write(20, ".", 1) = 1
[pid 1] write(20, ".", 1) = 1
^Cstrace: Process 1 detached
See also: capabilities(7)

Control Groups
• Control groups place quotas on processes or process groups
• cpu,cpuacct: used to cap CPU usage and apply CPU shares
• docker run --cpus --cpu-shares
• memory: used to cap user and kernel memory usage
• docker run --memory --kernel-memory
• blkio: used to cap IOPS and throughput per block device, and to assign weights (shares)
• docker run --device-{read,write}-{iops,bps}
• See also: cgroups(7)

Experiment: Controlling CPU Usage
host% docker run --rm -d --cpus=0.5 progrium/stress -c 4
host% top
...
%CPU %MEM     TIME+ COMMAND
12.7  0.0   0:09.74 stress
12.7  0.0   0:09.87 stress
12.7  0.0   0:09.79 stress
12.3  0.0   0:09.78 stress

Experiment: Identifying Throttling (1/2)
host% docker run --rm -d --cpus=0.5 progrium/stress -c 4
host% for pid in `pidof stress`; do echo $pid; \
      grep nonvoluntary /proc/${pid}/status; done
12652
nonvoluntary_ctxt_switches: 12487
12651
nonvoluntary_ctxt_switches: 13920
12650
nonvoluntary_ctxt_switches: 14001
12649
nonvoluntary_ctxt_switches: 12489
12617
nonvoluntary_ctxt_switches: 31

Experiment: Identifying Throttling (2/2)
host% CONTAINER=$(docker inspect --format='{{.Id}}' ...)
host% CGROUPDIR=/sys/fs/cgroup/cpu,cpuacct/docker/${CONTAINER}
host% cat ${CGROUPDIR}/cpu.cfs_period_us
100000
host% cat ${CGROUPDIR}/cpu.cfs_quota_us
50000
host% cat ${CGROUPDIR}/cpu.stat
nr_periods 8354
nr_throttled 8352
throttled_time 1249595091213
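To turn cpu.stat into a single throttling figure, a small sketch such as the following computes the fraction of CFS periods in which the container was throttled (it assumes the cgroup v1 layout and the CGROUPDIR variable from above; on cgroup v2 hosts the equivalent counters appear in a cpu.stat file under the container's scope directory):

host% awk '/^nr_periods/ {p=$2} /^nr_throttled/ {t=$2} END {printf "throttled in %.1f%% of periods\n", 100*t/p}' ${CGROUPDIR}/cpu.stat
throttled in 100.0% of periods

With the numbers from the run above (8352 of 8354 periods throttled), the four stress workers hit the 50 ms quota in nearly every 100 ms scheduling period.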
Experiment: Controlling Memory Usage (1/2)
host% docker run --name app -d --memory=128m ...
host% cat /sys/fs/cgroup/memory/docker/.../memory.limit_in_bytes
134217728
Note: overcommit behavior and uncommitted memory dramatically complicate things. How much memory is your process really using?

From /proc/$PID/status:
VmPeak:  2533932 kB
VmSize:  2533560 kB
VmLck:         4 kB
VmPin:         0 kB
VmHWM:     28144 kB
VmRSS:     28144 kB
RssAnon:    5232 kB
RssFile:   22912 kB
RssShmem:      0 kB
VmData:    69012 kB
VmStk:       132 kB
VmExe:       292 kB
VmLib:     56172 kB
VmPTE:       304 kB
VmSwap:        0 kB

From /sys/fs/cgroup/memory/.../memory.stat:
cache 8192
rss 5783552
rss_huge 0
shmem 8192
mapped_file 8192

Experiment: Controlling Memory Usage (2/2)
host% dmesg
[5027568.316065] simpleapp invoked oom-killer: ...
[5027568.349751]  oom_kill_process+0x218/0x420
[5027568.352827]  out_of_memory+0x2ea/0x4f0
...
... 131072kB, failcnt 8
[5027568.436555] Memory cgroup stats for /docker/fad6...: cache:8KB rss:129536KB rss_huge:0KB shmem:8KB mapped_file:8KB ...
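To check after the fact whether a container has been running into its limit, the memory cgroup also exposes a failure counter and a high-water mark. A minimal sketch against the cgroup v1 layout used above, with illustrative output (cgroup v2 instead reports an oom_kill count in memory.events):

host% cat /sys/fs/cgroup/memory/docker/.../memory.failcnt
8
host% cat /sys/fs/cgroup/memory/docker/.../memory.max_usage_in_bytes
134217728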