Checkpoint Restore in Userspace


SONG, CHANGAN (Leo)
APAC Technical Account Manager, Customer Experience & Engagement, Strategic Customer Engagement, Red Hat

CRIU: Checkpoint / Restore In Userspace
• Saves the current state of a running process
• Restores the process to that previously saved state (as of the checkpoint)
• Everything about a checkpointed process is written to one or more image files (memory pages, file descriptors, inter-process communication, and so on)
• The process can be restored on the same system or on a different one
• Used for purposes such as container live migration
• The criu package is included in RHEL 7.3 (criu 2.3)
• Provided as a Tech Preview feature
https://access.redhat.com/articles/2455211

CRIU: how does it work?
[Diagram: criu converts kernel objects (the process tree, files, sockets, pipes, and namespaces) into a set of image files, and back.]

CRIU: how does it work?
[Diagram: both dump and restore go through standard kernel interfaces: /proc, ptrace, syscalls, and netlink.]

CRIU Dump
• Parasite code: receives file descriptors, dumps memory content, prctl(), sigaction, pending signals, timers, etc.
• Ptrace: freezes the processes and injects the parasite code
• Netlink: gets information about sockets and network namespaces
• Procfs: /proc/PID/maps, /proc/PID/map_files/, /proc/PID/status, /proc/PID/mountinfo

CRIU Restore
• Collect shared objects
• Namespaces: restore namespaces
• Create a process tree: restore SID, PGID
• Processes: restore objects which should be inherited (files, sockets, pipes, ...)
• Restore per-task properties
• Restore memory
• Call sigreturn

CRIU: interesting moments
• How are shared objects restored? File descriptors are sent via unix sockets, and files are mapped from /proc/self/map_files/ to restore anonymous shared mappings
• How are memory mappings restored at the correct places? Map a new code block and a stack, unmap crtools' own mappings, then remap the task's mappings at the correct places
• How is a process resumed? Create a signal frame and call sigreturn()

Birth of C/R
• Developed for HPC environments
• Suited to environments where a single application runs distributed across hundreds or thousands of cores
• In particular, CRIU addresses the weakness that when such an application fails, all the CPU time spent so far is wasted and the data is lost
• Compatibility with the application still needs to be reviewed
• Drew little attention at first, then came into the spotlight for container migration

CRIU Limitations
• Checkpoint/restore works for processes that use inter-process communication (IPC)
• A checkpoint/restore always covers the parent process together with all of its child processes
• PIDs must stay the same; if a PID is already in use on the system, restoring the process with CRIU fails
https://criu.org/What_cannot_be_checkpointed
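As a minimal command-line sketch of the dump/restore cycle described above (the target PID, the /tmp/ckpt image directory, and the --shell-job and --leave-running options are illustrative assumptions, not requirements):

# criu check                                            (verify that the running kernel supports CRIU)
# criu dump -t <PID> -D /tmp/ckpt --shell-job --leave-running
                                                        (write image files for the process tree rooted at PID)
# criu restore -D /tmp/ckpt --shell-job
                                                        (recreate the process tree from the images; fails if the original PIDs are already in use)

Without --leave-running, criu kills the dumped process tree once the images are written; with it, the original processes keep running alongside the saved images.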
CRIU Live migration
[Diagram: a running workload is migrated from Host A to Host B over a shared FS; memory is pre-migrated with the memory tracker.]
http://criu.org/P.Haul

Load balancing on a cluster
[Diagram: workloads are moved between Host A, Host B and Host C to balance load.]

Power saving on a cluster
[Diagram: workloads are consolidated from Host A, Host B and Host C onto fewer hosts so idle machines can be powered down.]

Node maintenance
[Diagram: workloads are migrated from Host A to Host B before maintenance.]

Kernel upgrade without reboot
[Diagram: processes are checkpointed on the host, the host kexecs from kernel A to kernel B, and the processes are restored.]

Slow service startup
[Chart: service readiness over time for "# service foo start": spawn process, load config, top-up caches, initialize resource pools; the service only reaches 100% ready after time T.]

Slow service startup
[Chart: with "# service foo restore", only the process spawn remains, so the service is ready at time t < T.]

Periodic snapshots
• The memory tracker helps to keep the images smaller
[Diagram: snapshots are taken periodically over time.]

HPC
[Diagram: with periodic checkpoints, a power failure costs only the progress made since the last checkpoint; the job resumes at 60% instead of starting over at 0%.]

Advanced debugging
[Diagram: an application in trouble on a production host is checkpointed and restored on a developer host under a debugger.]

Advanced testing
[Diagram: a checkpointed workload is restored against a new test or new hardware.]

Installing the criu package on RHEL 7
# yum install criu -y
...
Dependencies Resolved
=============================================================================================
 Package                 Arch          Version             Repository                  Size
=============================================================================================
Installing:
 criu                    x86_64        2.3-2.el7           rhel-7-server-rpms         349 k
Installing for dependencies:
 protobuf-c              x86_64        1.0.2-3.el7         rhel-7-server-rpms          28 k
...

# ldd `which criu`
        linux-vdso.so.1 =>  (0x00007ffed554d000)
        librt.so.1 => /lib64/librt.so.1 (0x00007f5fd0faf000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5fd0d93000)
        libprotobuf-c.so.1 => /lib64/libprotobuf-c.so.1 (0x00007f5fd0b89000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00007f5fd0985000)
        libnl-3.so.200 => /lib64/libnl-3.so.200 (0x00007f5fd0764000)
        libc.so.6 => /lib64/libc.so.6 (0x00007f5fd03a2000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f5fd11bd000)
        libm.so.6 => /lib64/libm.so.6 (0x00007f5fd00a0000)

CRIU: how to use
1) criu on the command line
# criu --help
Usage:
  criu dump|pre-dump -t PID [<options>]
  criu restore [<options>]
  criu check [--feature FEAT]
  criu exec -p PID <syscall-string>
  criu page-server
  criu service [<options>]
  criu dedup
...

2) criu in runc
- Checkpoint a container:
# runc checkpoint <container name>
For example,
# runc checkpoint rhel7-httpd
- Restore a checkpointed container:
# runc restore -d <container name>
For example,
# runc restore -d rhel7-httpd
http://rhelblog.redhat.com/2016/12/08/container-live-migration-using-runc-and-criu/

Demo in runc

Runc
criu can now be used for the following applications running in a Red Hat Enterprise Linux 7 runc container:
• vsftpd
• apache httpd
• sendmail
• postgresql
• mongodb
• mariadb
• mysql
• tomcat
• dnsmasq

THANK YOU