Facultade de Informática
OPERATING SYSTEMS ADMINISTRATION
DEGREE IN COMPUTER ENGINEERING — INFORMATION TECHNOLOGIES SPECIALIZATION

SYSTEMD

Group name: AFV
Student 1: Sara Fernández Martínez, email 1: [email protected]
Student 2: Andrés Fernández Varela, email 2: [email protected]
Student 3: Javier Taboada Núñez, email 3: [email protected]
Student 4: Alejandro José Fernández Esmorís, email 4: [email protected]
Student 5: Luis Pita Romero, email 5: [email protected]

A Coruña, May 2021.

Contents

1 Introduction
  1.1 What is an init system?
  1.2 The need for an alternative
2 What is systemd?
  2.1 A bit of history
3 Units
4 systemd compatibility with SysV
5 Utilities
6 Systemctl
7 Systemd-boot: an alternative to GRUB
8 Advantages and disadvantages of systemd
  8.1 Advantages
    8.1.1 Main advantages
    8.1.2 In more depth
  8.2 Disadvantages
    8.2.1 Main disadvantages
    8.2.2 In more depth
9 Conclusions
Bibliography

List of figures

2.1 Example run of machinectl shell.
2.2 Example run of systemd-analyze.
2.3 Example run of systemd-analyze blame.
2.4 Example run of systemd-analyze critical-chain.
5.1 journalctl
5.2 journalctl -r option
5.3 journalctl -f option
5.4 journalctl -nx option
5.5 journalctl --since/--until options
5.6 journalctl UID option
5.7 journalctl -u option
5.8 journalctl -b option
5.9 journalctl -k option
5.10 hostnamectl
6.1 systemctl version.
6.2 Listing units.
6.3 Is a unit active?
6.4 Failed units.
6.5 Status of a unit.

Chapter 1: Introduction

1.1 What is an init system?

When a Linux machine boots, built-in code loaded from the BIOS or UEFI runs first, followed by the bootloader, which loads a Linux kernel according to its configuration. The kernel loads its drivers and starts init as the first process (with PID 1).

The design of init diverged between Unix System V and BSD systems. In the System V scheme, /etc/inittab defines the available runlevels, which determine the system's execution level. Each runlevel selects the subset of services or daemons started during boot; runlevels 0 and 6 are reserved for shutting down and rebooting the system, respectively. The BSD init instead runs the initialization script /etc/rc, and there are no runlevels: /etc/rc itself determines which services are started. The advantage of this approach is that it is simple and easy to maintain, but it is error-prone, since a small mistake in the script can halt the system boot.
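As an illustration of the System V scheme described above, a minimal /etc/inittab might look like the following sketch. The script paths and entries are hypothetical examples, not taken from any particular distribution; each line follows the format id:runlevels:action:process.

```
# /etc/inittab -- hypothetical System V example
# Default runlevel is 3 (multi-user)
id:3:initdefault:
# System initialization script, run once at boot
si::sysinit:/etc/rc.d/rc.sysinit
# Runlevel scripts: 0 halts the system, 3 starts multi-user services, 6 reboots
l0:0:wait:/etc/rc.d/rc 0
l3:3:wait:/etc/rc.d/rc 3
l6:6:wait:/etc/rc.d/rc 6
```

Here the "wait" action tells init to run the process when the runlevel is entered and wait for it to finish, which is exactly the sequential behavior whose cost is discussed next.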
1.2 The need for an alternative

The init system had design limitations: for example, services were started sequentially, which led to long boot times, and if a service failed the boot process could hang. These problems, together with other design flaws, motivated the creation of systemd, which was conceived to improve the efficiency of the System V init boot process. [1][2]

Chapter 2: What is systemd?

As its authors define it, systemd is a "basic building block" for an operating system. Somewhat more formally, it is a suite of system-management daemons, libraries, and tools designed to interact with the GNU/Linux operating system kernel. It is the first process in the Linux boot sequence to run in user space, and is therefore the parent of all the child processes that exist in that space. It is worth highlighting that systemd is free and open-source software, and that one of its main goals is to unify basic Linux configuration and service behavior across all distributions. [3]

2.1 A bit of history

systemd was initially developed by Lennart Poettering and Kay Sievers, among others; both were software engineers working for Red Hat when they began development in 2010. With it they set out to overcome the inefficiencies of the init daemon in several ways:

• Improve the software framework for expressing dependencies.
• Allow more processing to be performed in parallel during system boot.
• Reduce the computational overhead of the shell.

Over the years systemd has become the default init system of various Linux distributions; notably, Fedora was, in 2011, the first major Linux distribution to enable systemd by default.
Finally, in 2015 systemd began to provide a login shell, invocable via machinectl shell (figure 2.1), and in later years various security bugs were found that allowed DoS (Denial of Service) attacks against systemd.

Figure 2.1: Example run of machinectl shell.

Below we show some example runs of systemd commands.

In figure 2.2 we see that running systemd-analyze shows statistics about the system's boot performance.

Figure 2.2: Example run of systemd-analyze.

Next, in figure 2.3, the command systemd-analyze blame prints a list of all running units, sorted by how long they took to initialize.

Figure 2.3: Example run of systemd-analyze blame.

Lastly, in figure 2.4, the command systemd-analyze critical-chain prints a tree of the time-critical chain of units (for each of the specified UNITs, or for the default target otherwise). The time after which the unit became active or started is printed after the '@' character.

Figure 2.4: Example run of systemd-analyze critical-chain.

Chapter 3: Units

Everything managed by systemd is called a "unit", and each one is described by its own configuration file, whose extension depends on the type of unit involved [4]:

• .service: describes the configuration of a daemon.
• .socket: describes the configuration of a socket (UNIX or TCP/IP type) associated with a .service.
• .device: describes a hardware device recognized by the kernel and managed by systemd.
• .mount: describes a mount point managed by systemd.
• .automount: describes an automount point associated with a .mount.
• .swap: describes a swap partition or swap file managed by systemd.
• .target: describes a group of units.
• .path: describes a directory or file monitored via the kernel's inotify API.
• .timer: describes the timing of a task scheduled with the systemd scheduler.
• .slice: describes a group of units whose associated processes share managed and limited common resources (CPU, memory, ...); internally it uses the kernel's "cgroups".

Unit configuration files can be spread across three different directories:

• /usr/lib/systemd/system: units provided by packages installed on the system.
• /run/systemd/system: units generated on the fly while the system is running; these are not persistent.
• /etc/systemd/system: units provided by the system administrator.

Chapter 4: systemd compatibility with SysV

Like Upstart (an event-based replacement for the init daemon, the mechanism several Unix-like operating systems use to run tasks during system boot), systemd offers compatibility with SysV (the service and chkconfig commands) for those services that still only support or work with SysV init scripts (although nowadays few services do) [5]. Just as Upstart, while keeping compatibility with SysV's service and chkconfig commands, implemented its own service-management utility, systemd does the same with the systemctl tool.

Under SysV, services were enabled with chkconfig (which could also list them to see which ones ran at boot); under systemd we can do the same with the following commands:

To enable the httpd service at system boot:

• systemctl enable httpd.service

To list all installed service unit files:

• systemctl list-unit-files
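To make the unit concept concrete, here is a minimal sketch of a .service file of the kind an administrator would place in /etc/systemd/system. The service name "myapp" and its binary path are hypothetical examples, not taken from the report.

```
# /etc/systemd/system/myapp.service -- hypothetical example
[Unit]
# Human-readable description and ordering: start after the network is up
Description=Example application daemon
After=network.target

[Service]
# Command the unit runs; systemd supervises this process
ExecStart=/usr/local/bin/myapp --foreground
# Restart the daemon automatically if it exits with an error
Restart=on-failure

[Install]
# Makes "systemctl enable" hook the unit into the multi-user target
WantedBy=multi-user.target
```

After saving the file, systemctl daemon-reload makes systemd re-read its unit files, and systemctl enable --now myapp.service both enables the unit at boot and starts it immediately.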