Leveraging Linux Kernel Tracing to Classify and Detail Application Bottlenecks


Leveraging Linux Kernel Tracing to Classify and Detail Application Bottlenecks

MEng Individual Project
Imperial College London, Department of Computing

Author: Ashley J Davies-Lyons
Supervisor: Dr. Anthony Field
Second Marker: Dr. Giuliano Casale

June 17, 2019

Abstract

GAPP is a bottleneck identification tool that uses Linux kernel probes to identify periods of reduced parallelism in multithreaded programs. Although GAPP is effective at identifying the lines of source code that lead to a bottleneck, it is unable to classify the type of bottleneck - for example, whether it is due to lock contention or I/O. This project solves this problem by augmenting the stack traces generated by GAPP with classifications, and adds details of any files or IP addresses that were interacted with. Additionally, by tracking kernel-level synchronisation ('futex') calls, we develop a lock analysis feature that assists with identifying particularly critical locks (and unlockers) in user applications. Further, we provide a summary of the most critical individual file and synchronisation actions. In the spirit of GAPP, we implement this without requiring instrumentation, and without introducing any language or library dependencies.

We find that our extended tool is able to reliably classify the different categories of bottleneck, and adds information to the GAPP output which is useful in diagnosing the root causes of a bottleneck. We verify this with two large open source projects - an image tracking benchmark and a production game server.
Finally, we find that the overhead we add is competitive with similar tools, and that our tool works correctly with alternative threading libraries, having evaluated it with both TBB and pthreads. In addition to our main contributions, we add a number of quality-of-life improvements to the tool, including a user interface to present the data, improved stack trace reporting, and easier methods of attaching to user applications.

Acknowledgements

I am hugely thankful to my supervisor, Tony, for being consistently helpful, supportive, and available for meetings over the last year, in addition to being a great personal tutor for the past four years; and also to my co-supervisor, Reena, for being ever-willing to help with strange bugs on short notice, and whose extensive body of relevant knowledge and experience saved me an incalculable amount of debugging and stress. I'd also like to acknowledge my tremendous gratitude to my sixth form lecturers Gareth and Jonathan, whose eagle-eyed spotting of a mistake in my A-Level results rescued my place to study here in the first place. And, last but not least, I am grateful to my mother, the rest of my family, and my friends, who have been endlessly supportive through the last few months.

Contents

1 Introduction
  1.1 Objectives
  1.2 Contributions
2 Background
  2.1 Note on threading
  2.2 Software Performance
    2.2.1 Tracing
    2.2.2 Sampling
    2.2.3 Categorising performance analysis
  2.3 Task-based parallelism - TBB
  2.4 Overview of recent profiling tools and approaches
    2.4.1 wPerf
    2.4.2 Coz, and causal profiling
    2.4.3 TaskProf
    2.4.4 GAPP
    2.4.5 Comparison of GAPP and wPerf
  2.5 Synchronisation
    2.5.1 Background
    2.5.2 Processor-level synchronisation
      2.5.2.1 Summary for x86
    2.5.3 Synchronisation primitives
      2.5.3.1 Types of primitives
      2.5.3.2 Spinning
    2.5.4 Futexes
  2.6 eBPF: tracing in the Linux kernel
    2.6.1 BCC
    2.6.2 Performance of eBPF
3 Extending GAPP
  3.1 Overview
    3.1.1 Goal
    3.1.2 Implementation
    3.1.3 Feature summary
    3.1.4 Comparison
    3.1.5 Backend summary
  3.2 Causation flag system
  3.3 Identifying Synchronisation Bottlenecks
    3.3.1 Limitations of return probes
      3.3.1.1 Avoiding this issue
    3.3.2 Summary
    3.3.3 Tracking futex wait operations
    3.3.4 Tracking futex wake operations
  3.4 Identifying IO Bottlenecks
    3.4.1 Identifying IO Bottlenecks - Files
    3.4.2 Identifying IO Bottlenecks - Networking
    3.4.3 Identifying IO Bottlenecks - Reads and Writes
  3.5 Modifications to the core GAPP algorithm
  3.6 Python front-end processing
    3.6.1 Additional synchronisation stack traces
    3.6.2 Lock page
    3.6.3 Most Critical Futex Activity
    3.6.4 Most Critical File Activity
  3.7 User Experience
    3.7.1 Tracing relevant threads
    3.7.2 Enhanced Stack Trace Reporting
    3.7.3 User Interface
  3.8 Engineering larger systems with BCC and eBPF
    3.8.1 Separation of code
    3.8.2 Debugging issues
4 Evaluation
  4.1 Setup
  4.2 Note on glibc stack traces and parent threads
  4.3 Individual feature evaluation
    4.3.1 Elementary synchronisation
    4.3.2 Shared locks
    4.3.3 Multiple locks
    4.3.4 IO - File reading
    4.3.5 IO - File writing
  4.4 Alternative threading library
    4.4.1 Ad-hoc threading
    4.4.2 TBB
  4.5 Real-world Benchmarks & Programs
    4.5.1 Parsec - Bodytrack
    4.5.2 Cuberite - A Minecraft Server
  4.6 Quantifying errors
    4.6.1 Evaluating error count in Cuberite
    4.6.2 Missing futex traces
5 Conclusion
  5.1 Summary
  5.2 Future Work
    5.2.1 DNS Probing
    5.2.2 Conflation of identically-defined locks
    5.2.3 Ad-hoc synchronisation
    5.2.4 Unit testing
A Additional Screenshots
B Evaluation Code Listings

1 | Introduction

Improvements in the processing power of a single core have slowed considerably in the past decade compared to the prior decades. One of the most powerful consumer desktop CPUs today, the Intel i9-9900KF [1], ships with a 3.6GHz clock speed as standard; little more than the 3.46GHz of the Intel i7-990X [2] from 2011. Instead, core counts are increasing, and manufacturers are implementing technologies such as SMT (Simultaneous Multithreading, a.k.a. hyperthreading), which enable an individual core to run multiple threads concurrently. Almost every modern CPU [3] offers at least four to eight cores, and even a single chip made for a high performance server can contain dozens of cores. Accordingly, software applications are transitioning towards models which take advantage of multiple processor cores.

From a developer's perspective, the runtime operations of multi-threaded applications are generally harder to reason about than an equivalent single-threaded application. Additionally, traditional performance metrics and measurement tools designed for single-threaded applications are not as effective when applied to multi-threaded programs, because of extra inefficiencies and forms of bottlenecks which are introduced by the parallelism. Most obviously, there is the direct overhead introduced by the threading itself - for example, context switches caused by moving threads on and off CPU cores can add significant overhead to an otherwise efficient application. This overhead is important, but by far the most significant performance issues arise due to suboptimal thread synchronisation. When threads wait on each other (or even on IO devices such as a disk) when they could be doing other work, a program can suffer significant performance losses.
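As a toy illustration of this kind of synchronisation stall (not taken from the thesis; the thread count and sleep duration are arbitrary), the following sketch runs eight Python threads that each perform 10 ms of simulated work while holding a single coarse lock. The work serialises, so the elapsed time approaches the single-threaded total despite eight threads being available:

```python
import threading
import time

LOCK = threading.Lock()
HOLD_SECONDS = 0.01  # simulated critical-section work per thread
NUM_THREADS = 8

def worker():
    # All of the "work" happens under one coarse lock, so the threads
    # queue up on LOCK instead of running in parallel.
    with LOCK:
        time.sleep(HOLD_SECONDS)

def run():
    threads = [threading.Thread(target=worker) for _ in range(NUM_THREADS)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

if __name__ == "__main__":
    elapsed = run()
    # Because the lock serialises the work, elapsed time is at least
    # NUM_THREADS * HOLD_SECONDS regardless of how many cores exist.
    print(f"{NUM_THREADS} threads, coarse lock: {elapsed:.3f}s elapsed")
```

Narrowing the lock to cover only the genuinely shared state (or splitting it into several locks) is what allows the threads to run concurrently again; finding which lock to narrow is exactly the diagnosis problem this project addresses.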
On an eight core machine, an application with eight threads can perform worse than a single threaded version if there is significant contention for a single resource, or if locking is too coarse. Fixing these sorts of issues can be tricky, but it is often just as tricky to find the cause in the first place: it is far from trivial to know in advance which parts of an application will cause performance issues, and this difficulty is only worsened when we start trying to reason about the runtime of multithreaded software with complex inter-thread interactions.

Considering all of this, it is clear that there is a need for effective, straightforward, and accurate tools for developers to profile and diagnose inefficiencies in their applications. They must be effective so that they can help achieve large performance improvements, straightforward in order to enable mass adoption by developers, and accurate so as to avoid developers losing faith in the tool.

1.1 Objectives

There exists a wide body of research focused on identifying bottlenecks in multithreaded applications [4, 5, 6, 7], much of which assumes the use of a specific threading library [6] or focuses on a specific concurrency model such as task-parallel programming (Intel's TBB, Cilk, etc.) [5]. Research which takes a more general approach is fairly recent [4, 7, 8].

This project extends the bottleneck detection tool GAPP [8], which detects synchronisation bottlenecks using a generic approach that avoids introducing a dependency on a specific threading library or language, and does not require the program under analysis to be instrumented in any way. In its present form, GAPP utilises tracing features in the kernel (its inner workings are described further in subsection 2.4.4) to identify when a significant number of threads in an application are in a non-runnable state (e.g. waiting on a lock), and, while this is the case, takes and reports a stack trace sample from any application thread that is descheduled.
Additionally, the threads are sampled periodically to accumulate a set of critical functions and lines which were being executed by each thread prior to it being descheduled.
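GAPP performs this sampling inside the kernel with eBPF; purely as an illustration of the sampling idea at user level (this is not how GAPP is implemented, and the function names here are hypothetical), a Python thread can periodically capture the stacks of its sibling threads with sys._current_frames() and tally the functions they were executing. The functions that appear most often in the tally approximate where the threads spend their time:

```python
import collections
import sys
import threading
import time

def hot_loop(stop):
    # A deliberately busy function that the sampler should catch.
    while not stop.is_set():
        sum(range(1000))

def sample_threads(duration=0.3, interval=0.01):
    """Periodically record the currently-executing function of every
    other thread; the most frequent entries approximate the hot code."""
    counts = collections.Counter()
    me = threading.get_ident()
    end = time.monotonic() + duration
    while time.monotonic() < end:
        # sys._current_frames() maps thread id -> its topmost Python frame.
        for ident, frame in sys._current_frames().items():
            if ident != me:
                counts[frame.f_code.co_name] += 1
        time.sleep(interval)
    return counts

if __name__ == "__main__":
    stop = threading.Event()
    t = threading.Thread(target=hot_loop, args=(stop,))
    t.start()
    counts = sample_threads()
    stop.set()
    t.join()
    print(counts.most_common(3))
```

A kernel-side sampler like GAPP's differs in two important ways: it sees every thread of every traced process without cooperation from the target, and its per-sample cost is far lower, which is what makes the approach viable on production workloads.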