A Real-Time Operating System (RTOS) Is an Operating System (OS) Intended to Serve Real-Time Application Requests
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Demystifying the Real-Time Linux Scheduling Latency
Demystifying the Real-Time Linux Scheduling Latency
Daniel Bristot de Oliveira, Red Hat, Italy, [email protected]
Daniel Casini, Scuola Superiore Sant’Anna, Italy, [email protected]
Rômulo Silva de Oliveira, Universidade Federal de Santa Catarina, Brazil, [email protected]
Tommaso Cucinotta, Scuola Superiore Sant’Anna, Italy, [email protected]

Abstract. Linux has become a viable operating system for many real-time workloads. However, the black-box approach adopted by cyclictest, the tool used to evaluate the main real-time metric of the kernel, the scheduling latency, along with the absence of a theoretically-sound description of the in-kernel behavior, sheds some doubts about Linux meriting the real-time adjective. Aiming at clarifying the PREEMPT_RT Linux scheduling latency, this paper leverages the Thread Synchronization Model of Linux to derive a set of properties and rules defining the Linux kernel behavior from a scheduling perspective. These rules are then leveraged to derive a sound bound to the scheduling latency, considering all the sources of delays occurring in all possible sequences of synchronization events in the kernel. This paper also presents a tracing method, efficient in time and memory overheads, to observe the kernel events needed to define the variables used in the analysis. This results in an easy-to-use tool for deriving reliable scheduling latency bounds that can be used in practice. Finally, an experimental analysis compares cyclictest and the proposed tool, showing that the proposed method can find sound bounds faster with acceptable overheads.

2012 ACM Subject Classification: Computer systems organization → Real-time operating systems
Keywords and phrases: Real-time operating systems, Linux kernel, PREEMPT_RT, Scheduling latency
Digital Object Identifier: 10.4230/LIPIcs.ECRTS.2020.9
Supplementary Material: ECRTS 2020 Artifact Evaluation approved artifact available at https://doi.org/10.4230/DARTS.6.1.3.
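Since the excerpt centers on how cyclictest measures scheduling latency, a minimal sketch of that measurement idea may help: a SCHED_FIFO task sleeps until an absolute wake-up time and records how late it actually ran. This is an illustrative approximation written for this listing, not the paper's tool; the 1 ms period and priority 90 are arbitrary assumptions.

    /*
     * Minimal sketch of a cyclictest-style measurement loop, written for this
     * summary (it is not the paper's analysis tool). Assumes Linux with
     * permission to use SCHED_FIFO; period and priority are arbitrary choices.
     */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    #define NSEC_PER_SEC 1000000000LL
    #define PERIOD_NS    1000000LL      /* wake up every 1 ms */
    #define LOOPS        10000

    static long long ts_diff_ns(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * NSEC_PER_SEC + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        /* Run at a real-time priority so the measured delay reflects kernel
         * scheduling latency rather than competition with normal tasks. */
        struct sched_param sp = { .sched_priority = 90 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0)
            perror("sched_setscheduler (needs root or CAP_SYS_NICE)");

        struct timespec next, now;
        long long max_lat = 0;

        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < LOOPS; i++) {
            /* Program the next absolute wake-up time, one period ahead. */
            next.tv_nsec += PERIOD_NS;
            while (next.tv_nsec >= NSEC_PER_SEC) {
                next.tv_nsec -= NSEC_PER_SEC;
                next.tv_sec++;
            }
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);

            /* Latency = how long after the requested wake-up we actually ran. */
            clock_gettime(CLOCK_MONOTONIC, &now);
            long long lat = ts_diff_ns(now, next);
            if (lat > max_lat)
                max_lat = lat;
        }
        printf("max observed wake-up latency: %lld ns\n", max_lat);
        return 0;
    }

Compile with gcc and run with root privileges (or CAP_SYS_NICE) so the SCHED_FIFO request succeeds; without PREEMPT_RT the observed maxima will typically be much larger.

-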
Interrupt Handling in Linux
Department Informatik, Technical Reports / ISSN 2191-5008
Valentin Rothberg, Interrupt Handling in Linux, Technical Report CS-2015-07, November 2015
Please cite as: Valentin Rothberg, "Interrupt Handling in Linux," Friedrich-Alexander-Universität Erlangen-Nürnberg, Dept. of Computer Science, Technical Reports, CS-2015-07, November 2015.
Friedrich-Alexander-Universität Erlangen-Nürnberg, Department Informatik, Martensstr. 3 · 91058 Erlangen · Germany, www.cs.fau.de

Interrupt Handling in Linux
Valentin Rothberg
Distributed Systems and Operating Systems, Dept. of Computer Science, University of Erlangen, Germany
[email protected]
November 8, 2015

An interrupt is an event that alters the sequence of instructions executed by a processor and requires immediate attention. When the processor receives an interrupt signal, it may temporarily switch control to an interrupt service routine (ISR), and the suspended process (i.e., the previously running program) is resumed as soon as the interrupt has been served. The generic term interrupt is oftentimes used synonymously for two terms, interrupts and exceptions [2]. An exception is a synchronous event that occurs when the processor detects an error condition while executing an instruction. Such an error condition may be a division by zero, a page fault, a protection violation, etc. An interrupt, on the other hand, is an asynchronous event that occurs at random times during execution of a program in response to a signal from hardware. A proper and timely handling of interrupts is critical to the performance, but also to the security, of a computer system. In general, interrupts can be emitted by hardware as well as by software. Software interrupts (e.g., via the INT n instruction of the x86 instruction set architecture (ISA) [5]) are a means to change the execution context of a program to a more privileged interrupt context in order to enter the kernel and, in contrast to hardware interrupts, occur synchronously to the currently running program.
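To make the hardware-interrupt path above concrete, here is a minimal sketch of how a Linux driver registers an interrupt handler with request_irq(). It is an illustrative fragment, not taken from the report; the IRQ source, flags, and device name are placeholder assumptions.

    /*
     * Minimal sketch of registering a hardware interrupt handler in a Linux
     * kernel module. The IRQ number and device name are placeholders; a real
     * driver would obtain the IRQ from its bus/platform code and register it
     * from its probe routine.
     */
    #include <linux/interrupt.h>
    #include <linux/module.h>

    static irqreturn_t my_isr(int irq, void *dev_id)
    {
        /* Keep the hard-IRQ part short: acknowledge the device here and defer
         * longer work to a threaded handler, workqueue, or softirq. */
        return IRQ_HANDLED;
    }

    static int my_register_irq(unsigned int irq, void *dev)
    {
        /* IRQF_SHARED lets the line be shared with other devices; `dev` is
         * handed back to the handler as dev_id so it can find its device. */
        return request_irq(irq, my_isr, IRQF_SHARED, "my_device", dev);
    }

    /* On teardown the handler must be released again: free_irq(irq, dev); */
    MODULE_LICENSE("GPL");

-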
Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware
Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware

Yanyan Shen
Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy
School of Computer Science and Engineering, Faculty of Engineering
March 2019

Thesis/Dissertation Sheet
Surname/Family Name: Shen
Given Name/s: Yanyan
Abbreviation for degree as given in the University calendar: PhD
Faculty: Faculty of Engineering
School: School of Computer Science and Engineering
Thesis Title: Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware

Abstract: The thesis presents microkernel-based software-implemented mechanisms for improving the trustworthiness of computer systems based on commercial off-the-shelf (COTS) hardware that can malfunction when the hardware is impacted by transient hardware faults. The hardware anomalies, if undetected, can cause data corruptions, system crashes, and security vulnerabilities, significantly undermining system dependability. Specifically, we adopt the single event upset (SEU) fault model and address transient CPU or memory faults. We take advantage of the functional correctness and isolation guarantee provided by the formally verified seL4 microkernel and hardware redundancy provided by multicore processors, design the redundant co-execution (RCoE) architecture that replicates a whole software system (including the microkernel) onto different CPU cores, and implement two variants, loosely-coupled redundant co-execution (LC-RCoE) and closely-coupled redundant co-execution (CC-RCoE), for the ARM and x86 architectures. RCoE treats each replica of the software system as a state machine and ensures that the replicas start from the same initial state, observe consistent inputs, perform equivalent state transitions, and thus produce consistent outputs during error-free executions.

-
Process Scheduling II
PROCESS SCHEDULING II
CS124 – Operating Systems, Spring 2021, Lecture 12

Real-Time Systems
• Increasingly common to have systems with real-time scheduling requirements
• Real-time systems are driven by specific events
• Often a periodic hardware timer interrupt
• Can also be other events, e.g. detecting a wheel slipping, or an optical sensor triggering, or a proximity sensor reaching a threshold
• Event latency is the amount of time between an event occurring and when it is actually serviced
• Usually, real-time systems must keep event latency below a required threshold
• e.g. an antilock braking system has 3-5 ms to respond to wheel-slide
• The real-time system must try to meet its deadlines, regardless of system load
• Of course, this may not always be possible…

Real-Time Systems (2)
• Hard real-time systems require tasks to be serviced before their deadlines, otherwise the system has failed
• e.g. robotic assembly lines, antilock braking systems
• Soft real-time systems do not guarantee tasks will be serviced before their deadlines
• They typically only guarantee that real-time tasks will be higher priority than other, non-real-time tasks
• e.g. media players
• Within the operating system, two latencies affect the event latency of the system's response (a worked budget check follows this excerpt):
• Interrupt latency is the time between an interrupt occurring and the interrupt service routine beginning to execute
• Dispatch latency is the time the scheduler dispatcher takes to switch from one process to another

Interrupt Latency
• Interrupt latency in context (timeline figure of an interrupt arriving during a running task; figure omitted)
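To make the latency terms above concrete, here is the small worked budget check referenced in the bullet list: it adds interrupt latency, dispatch latency, and service time, and compares the total with the 5 ms antilock-braking deadline. All numbers are invented for illustration, not taken from the lecture.

    /* Hypothetical budget check for the antilock-braking example above; every
     * number is invented for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        double interrupt_latency_us = 20.0;    /* IRQ raised -> ISR starts       */
        double dispatch_latency_us  = 150.0;   /* scheduler switches to the task */
        double service_time_us      = 1800.0;  /* the handler's own work         */
        double deadline_us          = 5000.0;  /* 5 ms hard deadline             */

        double event_latency_us = interrupt_latency_us + dispatch_latency_us;
        double response_us      = event_latency_us + service_time_us;

        bool ok = response_us <= deadline_us;
        printf("event latency %.0f us, total response %.0f us -> %s\n",
               event_latency_us, response_us, ok ? "deadline met" : "DEADLINE MISS");
        return 0;
    }

-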
Isolation, Resource Management, and Sharing in Java
Processes in KaffeOS: Isolation, Resource Management, and Sharing in Java
Godmar Back, Wilson C. Hsieh, Jay Lepreau
School of Computing, University of Utah

Abstract
Single-language runtime systems, in the form of Java virtual machines, are widely deployed platforms for executing untrusted mobile code. These runtimes provide some of the features that operating systems provide: inter-application memory protection and basic system services. They do not, however, provide the ability to isolate applications from each other, or limit their resource consumption. This paper describes KaffeOS, a Java runtime system that provides these features. The KaffeOS architecture takes many lessons from operating system design, such as the use of a user/kernel boundary, and employs garbage collection techniques, such as write barriers. The KaffeOS architecture supports the OS abstraction of a process in a Java virtual machine.

… many environments for executing untrusted code: for example, applets, servlets, active packets [41], database queries [15], and kernel extensions [6]. Current systems (such as Java) provide memory protection through the enforcement of type safety and secure system services through a number of mechanisms, including namespace and access control. Unfortunately, malicious or buggy applications can deny service to other applications. For example, a Java applet can generate excessive amounts of garbage and cause a Web browser to spend all of its time collecting it. To support the execution of untrusted code, type-safe language runtimes need to provide a mechanism to isolate and manage the resources of applications, analogous to that provided by operating systems. Although other resource management abstractions exist [4], the classic OS …

-
Libresource - Getting System Resource Information with Standard APIs
Linux Plumbers Conference 2018
Contribution ID: 255, Type: not specified
libresource - Getting system resource information with standard APIs
Tuesday, 13 November 2018, 14:55 (15 minutes)

System resource information, like memory, network and device statistics, is crucial for system administrators to understand the inner workings of their systems, and is increasingly being used by applications to fine-tune performance in different environments. Getting system resource information on Linux is not a straightforward affair. The best way is to collect the information from procfs or sysfs, but getting such information from procfs or sysfs presents many challenges. Each time an application wants a piece of system resource information, it has to open a file, read the content and then parse the content to get the actual information (a minimal sketch of this manual approach follows this entry). If the application is running in a container, then even reading from procfs directly may give information about resources on the whole system instead of just the container.

Libresource tries to fix a few of these problems by providing a standard library with a set of APIs through which we can get system resource information, e.g. memory, CPU, stat, networking, and security-related information. Libresource provides (or may provide) the following benefits:
• Virtualization: In cases where the application is running in a virtualized environment using cgroups or namespaces, reading from the /proc and /sys file systems might not give correct information, as these are not cgroup-aware. The library API will take care of this, e.g. if a process is running in a cgroup then the library should provide information that is local to that cgroup.
• Ease of use: Currently, applications need to read this information mostly from the /proc and /sys file systems.
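As a concrete illustration of the manual approach the talk criticizes, the sketch below opens /proc/meminfo and parses a single field by hand. It shows the conventional procfs technique, not libresource's own API, and the choice of MemAvailable is only an example.

    /*
     * Minimal sketch of the manual approach described above: open a procfs
     * file, read it, and parse one field by hand.
     */
    #include <stdio.h>

    /* Return MemAvailable in kB, or -1 on failure. */
    static long read_mem_available_kb(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        if (!f)
            return -1;

        char line[256];
        long kb = -1;
        while (fgets(line, sizeof(line), f)) {
            if (sscanf(line, "MemAvailable: %ld kB", &kb) == 1)
                break;                      /* found the field we care about */
        }
        fclose(f);
        return kb;
    }

    int main(void)
    {
        printf("MemAvailable: %ld kB\n", read_mem_available_kb());
        return 0;
    }

-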
Introduction to Performance Management
CHAPTER 1: Introduction to Performance Management

Application developers and system administrators face similar challenges in managing the performance of a computer system. Performance management starts with application design and development and migrates to administration and tuning of the deployed production system or systems. It is necessary to keep performance in mind at all stages in the development and deployment of a system and application. There is a definite overlap in the responsibilities of the developer and administrator. Sometimes determining where one ends and the other begins is more difficult when a single person or small group develops, administers and uses the application. This chapter will look at:
• Application developer’s perspective
• System administrator’s perspective
• Total system resource perspective
• Rules of performance tuning

1.1 Application Developer’s Perspective
The tasks of the application developer include:
• Defining the application
• Determining the specifications
• Designing application components
• Developing the application codes
• Testing, tuning, and debugging
• Deploying the system and application
• Maintaining the system and application

1.1.1 Defining the Application
The first step is to determine what the application is going to be. Initially, management may need to define the priorities of the development group. Surveys of user organizations may also be carried out.

1.1.2 Determining Application Specifications
Defining what the application will accomplish is necessary before any code is written. The users and developers should agree, in advance, on the particular features and/or functionality that the application will provide. Often, performance specifications are agreed upon at this time, and these are typically expressed in terms of user response time or system throughput measures.

-
Efficient Interrupts on Cortex-M Microcontrollers
Efficient Interrupts on Cortex-M Microcontrollers
Chris Shore
Training Manager, ARM Ltd, Cambridge, UK
[email protected]

Abstract—The design of real-time embedded systems involves a constant trade-off between meeting real-time design goals and operating within power and energy budgets. This paper explores how the architectural features of ARM® Cortex®-M microcontrollers can be used to maximize power efficiency while ensuring that real-time goals are met.

Keywords—interrupts; microcontroller; real-time; power efficiency.

I. INTRODUCTION
A “real-time” system is one which has to react to external events within certain time constraints. Wikipedia says: “A system is real-time if the total correctness of an operation depends not only on its logical correctness but also upon the time in which it is performed.”
So, assuming that we have a system which is powerful enough to meet all its real-time design constraints, we have a secondary problem. That problem is making the system as efficient as possible. If we have sufficient processing power, it is relatively trivial to conceive and implement a design which meets all time-related goals, but such a system might be extremely inefficient when it comes to energy consumption. The requirement to minimize energy consumption introduces another set of goals and constraints. It requires us to make use of sleep modes and low power modes in the hardware, to use the most efficient methods for switching from one event to another, and to maximize “idle time” in the design to make opportunities to use low power modes.
The battle is between meeting the time constraints while using the least amount of energy in doing so.
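A minimal sketch of the sleep-between-interrupts pattern the introduction alludes to is shown below, assuming a CMSIS-based Cortex-M project; the peripheral handler name and event counter are hypothetical and do not come from the paper.

    /*
     * Minimal sketch of an interrupt-driven Cortex-M main loop that sleeps
     * between events. Assumes a CMSIS toolchain (for __WFI() and friends);
     * the handler name MyPeripheral_IRQHandler is hypothetical - real vector
     * names come from the device's startup file.
     */
    #include <stdint.h>
    #include "cmsis_compiler.h"     /* __WFI(), __disable_irq(), __enable_irq() */

    static volatile uint32_t events_pending;

    /* Keep the ISR short: just record that an event happened. */
    void MyPeripheral_IRQHandler(void)
    {
        events_pending++;
    }

    int main(void)
    {
        for (;;) {
            uint32_t have_event = 0;

            /* Mask interrupts so the check-and-sleep (or check-and-claim) step
             * cannot race with a newly arriving interrupt. */
            __disable_irq();
            if (events_pending == 0) {
                /* WFI still wakes on a pending interrupt even though PRIMASK
                 * is set; the ISR itself only runs after __enable_irq(). */
                __WFI();
            } else {
                events_pending--;           /* claim one event while masked */
                have_event = 1;
            }
            __enable_irq();

            if (have_event) {
                /* ... process the event here, with interrupts enabled ... */
            }
        }
    }

-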
Linux® Resource Administration Guide
Linux® Resource Administration Guide
Document number 007–4413–004

CONTRIBUTORS
Written by Terry Schultz. Illustrated by Chris Wengelski. Production by Karen Jacobson. Engineering contributions by Jeremy Brown, Marlys Kohnke, Paul Jackson, John Hesterberg, Robin Holt, Kevin McMahon, Troy Miller, Dennis Parker, Sam Watters, and Todd Wyman.

COPYRIGHT
© 2002–2004 Silicon Graphics, Inc. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation in any manner, in whole or in part, without the prior written permission of Silicon Graphics, Inc.

LIMITED RIGHTS LEGEND
The electronic (software) version of this document was developed at private expense; if acquired under an agreement with the USA government or any contractor thereto, it is acquired as "commercial computer software" subject to the provisions of its applicable license agreement, as specified in (a) 48 CFR 12.212 of the FAR; or, if acquired for Department of Defense units, (b) 48 CFR 227-7202 of the DoD FAR Supplement; or sections succeeding thereto. Contractor/manufacturer is Silicon Graphics, Inc., 1600 Amphitheatre Pkwy 2E, Mountain View, CA 94043-1351.

TRADEMARKS AND ATTRIBUTIONS
Silicon Graphics, SGI, the SGI logo, and IRIX are registered trademarks and SGI Linux and SGI ProPack for Linux are trademarks of Silicon Graphics, Inc., in the United States and/or other countries worldwide. SGI Advanced Linux Environment 2.1 is based on Red Hat Linux Advanced Server 2.1 for the Itanium Processor, but is not sponsored by or endorsed by Red Hat, Inc.

-
SUSE Linux Enterprise Server 11 SP4 System Analysis and Tuning Guide System Analysis and Tuning Guide SUSE Linux Enterprise Server 11 SP4
SUSE Linux Enterprise Server 11 SP4: System Analysis and Tuning Guide
Publication Date: September 24, 2021
SUSE LLC, 1800 South Novell Place, Provo, UT 84606, USA, https://documentation.suse.com

Copyright © 2006–2021 SUSE LLC and contributors. All rights reserved. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled "GNU Free Documentation License". For SUSE trademarks, see http://www.suse.com/company/legal/ . All other third-party trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell trademark; an asterisk (*) denotes a third-party trademark. All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.

Contents
About This Guide
1 Available Documentation
2 Feedback
3 Documentation Conventions
I BASICS
1 General Notes on System Tuning
1.1 Be Sure What Problem to Solve
1.2 Rule Out Common Problems
1.3 Finding the Bottleneck
1.4 Step-by-step Tuning
II SYSTEM MONITORING
2 System Monitoring Utilities
2.1 Multi-Purpose Tools
vmstat

-
Thread Verification Vs. Interrupt Verification
Thread Verification vs. Interrupt Verification
John Regehr
School of Computing, University of Utah

ABSTRACT
Interrupts are superficially similar to threads, but there are subtle semantic differences between the two abstractions. This paper compares and contrasts threads and interrupts from the point of view of verifying the absence of race conditions. We identify a small set of extensions that permit thread verification tools to also verify interrupt-driven software, and we present examples of source-to-source transformations that turn interrupt-driven code into semantically equivalent thread-based code that can be checked by a thread verifier.

1. INTRODUCTION
For programs running on general-purpose operating systems on PC-class hardware, threads are probably the most important abstraction supporting concurrent programming. On the other hand, interrupts are the dominant method for expressing concurrency in embedded systems, particularly those based on small microcontroller units (MCUs). All …
… to verifiers for thread-based programs that permit them to also check interrupt-based programs. A secondary goal is to exploit the semantics of interrupts to make checking faster and more precise.
Of course, threads come in many flavors. In this paper we’ll assume POSIX-style threads [5]: preemptively scheduled blocking threads, scheduled either in the kernel or at user level.

2. THREADS AND INTERRUPTS
The following are some significant ways in which threads differ from interrupts.
Blocking. Interrupts cannot block: they run to completion unless preempted by other interrupts. The inability to block is very inconvenient, and it is one of the main reasons that complex logic should not be implemented in interrupt code.
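To give a flavor of the kind of source-to-source mapping the abstract mentions, here is a small illustrative sketch written for this listing (it does not reproduce the paper's actual rules): the interrupt handler becomes a POSIX thread, and interrupt-disable/enable brackets become lock/unlock operations, so a thread-based race checker can analyze the result.

    /*
     * Illustrative sketch only: an ISR modeled as a POSIX thread, with the
     * original code's "disable/enable interrupts" brackets mapped onto a lock
     * that the ISR thread must also take. Build with: gcc irq_model.c -pthread
     */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t irq_disabled = PTHREAD_MUTEX_INITIALIZER;
    static volatile int shared_counter;

    /* Stands in for the original ISR: it runs its critical work while holding
     * the lock, mirroring the fact that the real ISR is not preempted by the
     * main-line code it interrupts. */
    static void *isr_model(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&irq_disabled);
            shared_counter++;                    /* work done "in the ISR" */
            pthread_mutex_unlock(&irq_disabled);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t isr;
        pthread_create(&isr, NULL, isr_model, NULL);

        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&irq_disabled);   /* was: disable_interrupts() */
            shared_counter--;                    /* main-line critical section */
            pthread_mutex_unlock(&irq_disabled); /* was: enable_interrupts()  */
        }

        pthread_join(isr, NULL);
        printf("counter = %d\n", shared_counter);
        return 0;
    }

-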
IBM Power Systems Virtualization Operation Management for SAP Applications
IBM Redbooks Redpaper: IBM Power Systems Virtualization Operation Management for SAP Applications
Dino Quintero, Enrico Joedecke, Katharina Probst, Andreas Schauberer
March 2020, REDP-5579-00, First Edition (March 2020)

Note: Before using this information and the product it supports, read the information in "Notices" on page v.

This edition applies to the following products: Red Hat Enterprise Linux 7.6, Red Hat Virtualization 4.2, SUSE Linux SLES 12 SP3, HMC V9 R1.920.0, Novalink 1.0.0.10, ipmitool V1.8.18.

© Copyright International Business Machines Corporation 2020. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents
Notices
Trademarks
Preface
Authors
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks
Chapter 1. Introduction
1.1 Preface
Chapter 2. Server virtualization
2.1 Introduction
2.2 Server and hypervisor options
2.2.1 Power Systems models that support PowerVM versus KVM
2.2.2 Overview of POWER8 and POWER9 processor-based hardware models
2.2.3 Comparison of PowerVM and KVM / RHV
2.3 Hypervisors
2.3.1 Introducing IBM PowerVM
2.3.2 Kernel-based virtual machine introduction
2.3.3 Resource overcommitment
2.3.4 Red Hat Virtualization
Chapter 3. IBM PowerVM management and operations
3.1 Shared processor logical partitions
3.1.1 Configuring a shared processor LPAR