Université de Montréal Low-Impact Operating
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Oracle Solaris: The Carrier-Grade Operating System Technical Brief
An Oracle White Paper, February 2011. Oracle Solaris: The Carrier-Grade Operating System.
Contents: Executive Summary; Powering Communication—The Oracle Solaris® Ecosystem (Integrated and Optimized Stack; End-to-End Security; Unparalleled Performance and Scalability; Increased Reliability; Unmatched Flexibility; SCOPE Alliance); Security (Security Hardening and Monitoring; Process and User Rights Management; Network Security and Encrypted Communications); Virtualization (Oracle VM Server for SPARC; Oracle Solaris Zones; Virtualized Networking).
-
MontaVista Linux Carrier Grade Edition
White Paper.
Abstract: Communications networks are very different from other kinds of computing applications. Networks require both very high reliability and very high performance. Not only must they process large volumes of data at high speed, but they must do so while limiting their downtime to minutes per year. Network failures can have huge economic consequences and can even result in the loss of life. Additionally, in this post-9/11 world, failure also has a national security dimension. This technical whitepaper introduces MontaVista Linux Carrier Grade Edition—what it is, how it evolved, and how it provides a foundation for developing and deploying carrier-grade applications like those described above.
Contents include: Introduction (Intended Audience; Organization); Overview and Background (Carrier Grade Requirements; …).
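A quick sanity check on the "minutes per year" figure (ours, not the white paper's): the classic carrier-grade target of 99.999% availability ("five nines") allows

    (1 - 0.99999) × 365.25 days × 24 h × 60 min ≈ 5.26 minutes

of accumulated downtime per year, which is exactly the regime the abstract describes.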
-
Industrial Control via Application Containers: Migrating from Bare-Metal to IAAS
Florian Hofer (Student Member, IEEE) and Barbara Russo, Faculty of Computer Science, Free University of Bolzano-Bozen, Bolzano, Italy; Martin A. Sehr and Ines Ugalde, Corporate Technology, Siemens Corporation, Berkeley, CA, USA; Antonio Iannopollo (Member, IEEE) and Alberto Sangiovanni-Vincentelli (Fellow, IEEE), EECS Department, University of California, Berkeley, CA, USA.

Abstract—We explore the challenges and opportunities of shifting industrial control software from dedicated hardware to bare-metal servers or cloud computing platforms using off the shelf technologies. In particular, we demonstrate that executing time-critical applications on cloud platforms is viable based on a series of dedicated latency tests targeting relevant real-time configurations.

Index Terms—Industrial Control Systems, Real-Time, IAAS, Containers, Determinism

I. INTRODUCTION: Emerging technologies such as the Internet of Things and […] control design full authority over the environment in which its software will run, it is not straightforward to determine under what conditions the software can be executed on cloud computing platforms due to resource virtualization. Yet, we believe that the principles of Industry 4.0 present a unique opportunity to explore complementing traditional automation components with a novel control architecture [3]. We believe that modern virtualization techniques such as application containerization [3]–[5] are essential for adequate utilization of cloud computing resources in industrial control systems.
-
RCU Usage in the Linux Kernel: One Decade Later
Paul E. McKenney, Linux Technology Center, IBM Beaverton; Silas Boyd-Wickizer, MIT CSAIL; Jonathan Walpole, Computer Science Department, Portland State University.

Abstract: Read-copy update (RCU) is a scalable high-performance synchronization mechanism implemented in the Linux kernel. RCU's novel properties include support for concurrent reading and writing, and highly optimized inter-CPU synchronization. Since RCU's introduction into the Linux kernel over a decade ago its usage has continued to expand. Today, most kernel subsystems use RCU. This paper discusses the requirements that drove the development of RCU, the design and API of the Linux RCU implementation, and how kernel developers apply RCU.

1 Introduction: The first Linux kernel to include multiprocessor support is not quite 20 years old. […] unique among the commonly used kernels. Understanding RCU is now a prerequisite for understanding the Linux implementation and its performance. The success of RCU is, in part, due to its high performance in the presence of concurrent readers and updaters. The RCU API facilitates this with two relatively simple primitives: readers access data structures within RCU read-side critical sections, while updaters use RCU synchronization to wait for all pre-existing RCU read-side critical sections to complete. When combined, these primitives allow threads to concurrently read data structures, even while other threads are updating them. This paper describes the performance requirements that led to the development of RCU, gives an overview of the RCU API and implementation, and examines how kernel developers have used RCU to optimize kernel performance.
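To make the two primitives mentioned above concrete, here is a minimal sketch of the common RCU-protected pointer pattern in Linux kernel C. It is not code from the paper; struct foo, the global pointer gp, and the lock gp_lock are hypothetical names, while rcu_read_lock()/rcu_read_unlock(), rcu_dereference(), rcu_assign_pointer(), and synchronize_rcu() are the standard kernel API.

/*
 * Minimal sketch of RCU read-side and update-side usage (hypothetical
 * struct foo, global pointer gp, and lock gp_lock; error handling trimmed).
 */
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {
        int a;
};

static struct foo __rcu *gp;            /* RCU-protected global pointer */
static DEFINE_SPINLOCK(gp_lock);        /* serializes updaters */

/* Reader: accesses the data inside an RCU read-side critical section. */
static int read_a(void)
{
        struct foo *p;
        int val = -1;

        rcu_read_lock();                /* begin read-side critical section */
        p = rcu_dereference(gp);        /* fetch the protected pointer */
        if (p)
                val = p->a;             /* use it while still protected */
        rcu_read_unlock();              /* end read-side critical section */
        return val;
}

/* Updater: publishes a new version, then waits for pre-existing readers. */
static int update_a(int new_a)
{
        struct foo *new_p, *old_p;

        new_p = kmalloc(sizeof(*new_p), GFP_KERNEL);
        if (!new_p)
                return -ENOMEM;
        new_p->a = new_a;

        spin_lock(&gp_lock);
        old_p = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
        rcu_assign_pointer(gp, new_p);  /* publish the new version */
        spin_unlock(&gp_lock);

        synchronize_rcu();              /* wait for pre-existing readers */
        kfree(old_p);                   /* now safe to reclaim the old copy */
        return 0;
}

The key property, as the abstract says, is that read_a() never blocks or spins: it can run concurrently with update_a(), and synchronize_rcu() is what makes deferred reclamation of the old version safe.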
-
Is Parallel Programming Hard, And, If So, What Can You Do About It?
Edited by: Paul E. McKenney, Linux Technology Center, IBM Beaverton. December 16, 2011.

Legal Statement: This work represents the views of the authors and does not necessarily represent the view of their employers. IBM, zSeries, and Power PC are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Linux is a registered trademark of Linus Torvalds. i386 is a trademark of Intel Corporation or its subsidiaries in the United States, other countries, or both. Other company, product, and service names may be trademarks or service marks of such companies. The non-source-code text and images in this document are provided under the terms of the Creative Commons Attribution-Share Alike 3.0 United States license (http://creativecommons.org/licenses/by-sa/3.0/us/). In brief, you may use the contents of this document for any purpose, personal, commercial, or otherwise, so long as attribution to the authors is maintained. Likewise, the document may be modified, and derivative works and translations made available, so long as such modifications and derivations are offered to the public on equal terms as the non-source-code text and images in the original document. Source code is covered by various versions of the GPL (http://www.gnu.org/licenses/gpl-2.0.html). Some of this code is GPLv2-only, as it derives from the Linux kernel, while other code is GPLv2-or-later.
-
Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware
Yanyan Shen. Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy, School of Computer Science and Engineering, Faculty of Engineering, March 2019.

Thesis/Dissertation Sheet: Surname/Family Name: Shen; Given Name/s: Yanyan; Abbreviation for degree as given in the University calendar: PhD; Faculty: Faculty of Engineering; School: School of Computer Science and Engineering; Thesis Title: Microkernel Mechanisms for Improving the Trustworthiness of Commodity Hardware.

Abstract: The thesis presents microkernel-based software-implemented mechanisms for improving the trustworthiness of computer systems based on commercial off-the-shelf (COTS) hardware that can malfunction when the hardware is impacted by transient hardware faults. The hardware anomalies, if undetected, can cause data corruptions, system crashes, and security vulnerabilities, significantly undermining system dependability. Specifically, we adopt the single event upset (SEU) fault model and address transient CPU or memory faults. We take advantage of the functional correctness and isolation guarantee provided by the formally verified seL4 microkernel and hardware redundancy provided by multicore processors, design the redundant co-execution (RCoE) architecture that replicates a whole software system (including the microkernel) onto different CPU cores, and implement two variants, loosely-coupled redundant co-execution (LC-RCoE) and closely-coupled redundant co-execution (CC-RCoE), for the ARM and x86 architectures. RCoE treats each replica of the software system as a state machine and ensures that the replicas start from the same initial state, observe consistent inputs, perform equivalent state transitions, and thus produce consistent outputs during error-free executions.
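As a toy illustration of the state-machine framing in the abstract (this is not the seL4 RCoE mechanism, just the general replica-comparison idea, with made-up state and input types):

/* Toy illustration (not the thesis's implementation): two replicas of a
 * deterministic state machine consume identical inputs; diverging outputs
 * indicate that a transient fault corrupted one replica. */
#include <stdio.h>
#include <stdint.h>

struct replica {
    uint64_t state;
};

/* Deterministic transition: same state + same input => same output. */
static uint64_t step(struct replica *r, uint64_t input)
{
    r->state = r->state * 6364136223846793005ULL + input;  /* arbitrary mixing */
    return r->state;
}

int main(void)
{
    struct replica a = { .state = 42 }, b = { .state = 42 };  /* same initial state */
    const uint64_t inputs[] = { 1, 2, 3, 4, 5 };              /* consistent inputs */
    const int n = (int)(sizeof(inputs) / sizeof(inputs[0]));

    for (int i = 0; i < n; i++) {
        uint64_t out_a = step(&a, inputs[i]);
        uint64_t out_b = step(&b, inputs[i]);
        if (out_a != out_b) {                 /* compare outputs each step */
            fprintf(stderr, "divergence at step %d: fault suspected\n", i);
            return 1;
        }
    }
    puts("replicas agree: no divergence observed");
    return 0;
}

Real transient-fault detection is far harder than this toy, precisely because real inputs (interrupts, device reads, timers) must be made consistent across replicas, which is what the LC-RCoE and CC-RCoE variants address.
-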
RTAI-Lab Tutorial: Scicoslab, Comedi, and Real-Time Control
Roberto Bucher, Simone Mannori, Thomas Netter. May 24, 2010.

Summary: RTAI-Lab is a tool chain for real-time software and control system development. This tutorial shows how to install the various components: the RTAI real-time Linux kernel, the Comedi interface for control and measurement hardware, the Scicoslab GUI-based CACSD modeling software and associated RTAI-Lab blocks, and the xrtailab interactive oscilloscope. RTAI-Lab's Scicos blocks are detailed and examples show how to develop elementary block diagrams, automatically generate real-time executables, and add custom elements.

(Main RTAI-Lab developer and contact for technical questions: Roberto Bucher, roberto.bucher at supsi.ch.)

Contents: 1 Introduction (1.1 RTAI-Lab tool chain; 1.2 Commercial software); 2 Installation (2.1 Requirements: hardware and software; 2.2 Mesa library; 2.3 EFLTK library; 2.4 Linux kernel and RTAI patch; 2.5 Comedilib; 2.6 RTAI (1st pass); 2.7 RTAI tests; 2.8 Comedi; 2.9 RTAI (2nd pass); 2.10 ScicosLab; 2.11 RTAI-Lab add-ons to Scicoslab-4.4; 2.12 User configuration for scicoslab-4.4; 2.13 Load the modules); 3 Development with RTAI-Lab (3.1 Boot Linux-RTAI; 3.2 Start Scicos; 3.3 RTAI-Lib palette; 3.4 Real-time sinewave: step by step; 3.4.1 Create block diagram; …).
-
Advanced Operating Systems #1
<Slides download> https://www.pf.is.s.u-tokyo.ac.jp/classes
Advanced Operating Systems #2. Hiroyuki Chishiro, Project Lecturer, Department of Computer Science, Graduate School of Information Science and Technology, The University of Tokyo, Room 007.

Introduction of Myself: Name: Hiroyuki Chishiro; Project Lecturer at Kato Laboratory, December 2017 to present. Short Biography: Ph.D. at Keio University in March 2012 (Yamasaki Laboratory, same as Prof. Kato); JSPS Research Fellow (PD), April 2012 – March 2014; Research Associate at Keio University, April 2014 – March 2016; Assistant Professor at Advanced Institute of Industrial Technology, April 2016 – November 2017. Research Interests: Real-Time Systems, Operating Systems, Middleware, Trading Systems.

Course Plan: Multi-core Resource Management; Many-core Resource Management; GPU Resource Management; Virtual Machines; Distributed File Systems; High-performance Networking; Memory Management; Network on a Chip; Embedded Real-time OS; Device Drivers; Linux Kernel.

Schedule: 1. 2018.9.28 Introduction + Linux Kernel (Kato); 2. 2018.10.5 Linux Kernel (Chishiro); 3. 2018.10.12 Linux Kernel (Kato); 4. 2018.10.19 Linux Kernel (Kato); 5. 2018.10.26 Linux Kernel (Kato); 6. 2018.11.2 Advanced Research (Chishiro); 7. 2018.11.9 Advanced Research (Chishiro); 8. 2018.11.16 (No Class); 9. 2018.11.23 (Holiday); 10. 2018.11.30 Advanced Research (Kato); 11. 2018.12.7 Advanced Research (Kato); 12. 2018.12.14 Advanced Research (Chishiro); 13. 2018.12.21 Linux Kernel; 14. 2019.1.11 Linux Kernel; 15. 2019.1.18 (No Class); 16. 2019.1.25 (No Class).

Linux Kernel: Introducing Synchronization /* The cases for Linux */. Acknowledgement: Prof. …
-
Process Scheduling II
PROCESS SCHEDULING II, CS124 – Operating Systems, Spring 2021, Lecture 12.

Real-Time Systems
• Increasingly common to have systems with real-time scheduling requirements
• Real-time systems are driven by specific events: often a periodic hardware timer interrupt, but also other events, e.g. detecting a wheel slipping, an optical sensor triggering, or a proximity sensor reaching a threshold
• Event latency is the amount of time between an event occurring and when it is actually serviced
• Usually, real-time systems must keep event latency below a minimum required threshold, e.g. an antilock braking system has 3-5 ms to respond to wheel-slide
• The real-time system must try to meet its deadlines regardless of system load; of course, this may not always be possible

Real-Time Systems (2)
• Hard real-time systems require tasks to be serviced before their deadlines, otherwise the system has failed, e.g. robotic assembly lines, antilock braking systems
• Soft real-time systems do not guarantee tasks will be serviced before their deadlines; typically they only guarantee that real-time tasks will be higher priority than other non-real-time tasks, e.g. media players
• Within the operating system, two latencies affect the event latency of the system's response: interrupt latency is the time between an interrupt occurring and the interrupt service routine beginning to execute; dispatch latency is the time the scheduler dispatcher takes to switch from one process to another

Interrupt Latency
• Interrupt latency in context (figure) …
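The slides distinguish interrupt latency from dispatch latency; a common way to observe their combined effect from user space is a cyclictest-style periodic wakeup measurement. This is a minimal, hypothetical sketch (not from the lecture), using standard POSIX calls:

/* Minimal cyclictest-style wakeup-latency probe (illustrative only).
 * Sleeps until an absolute deadline each period and measures how late the
 * thread actually wakes up; that lateness includes timer/interrupt latency
 * plus scheduler dispatch latency.
 * Build with: cc -O2 probe.c -o probe   (older glibc may need -lrt) */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define PERIOD_NS    1000000L           /* 1 ms period */
#define NSEC_PER_SEC 1000000000L
#define LOOPS        1000

static void ts_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= NSEC_PER_SEC) {
        t->tv_nsec -= NSEC_PER_SEC;
        t->tv_sec++;
    }
}

static int64_t ts_diff_ns(const struct timespec *a, const struct timespec *b)
{
    return (int64_t)(a->tv_sec - b->tv_sec) * NSEC_PER_SEC
         + (a->tv_nsec - b->tv_nsec);
}

int main(void)
{
    struct timespec next, now;
    int64_t worst = 0;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < LOOPS; i++) {
        ts_add_ns(&next, PERIOD_NS);
        /* Sleep until the absolute deadline... */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* ...then see how far past the deadline we actually woke up. */
        clock_gettime(CLOCK_MONOTONIC, &now);
        int64_t late = ts_diff_ns(&now, &next);
        if (late > worst)
            worst = late;
    }
    printf("worst observed wakeup latency: %lld ns\n", (long long)worst);
    return 0;
}

On a stock kernel this mostly exposes timer and scheduling jitter; pinning the thread, giving it a real-time policy such as SCHED_FIFO, and running a PREEMPT_RT kernel are the usual ways to shrink the worst case.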
-
Filesystem Hierarchy Standard
LSB Workgroup, The Linux Foundation. Version 3.0, publication date March 19, 2015. Copyright © 2015 The Linux Foundation; Copyright © 1994-2004 Daniel Quinlan; Copyright © 2001-2004 Paul 'Rusty' Russell; Copyright © 2003-2004 Christopher Yeoh.

Abstract: This standard consists of a set of requirements and guidelines for file and directory placement under UNIX-like operating systems. The guidelines are intended to support interoperability of applications, system administration tools, development tools, and scripts as well as greater uniformity of documentation for these systems. All trademarks and copyrights are owned by their owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Permission is granted to make and distribute verbatim copies of this standard provided the copyright and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this standard under the conditions for verbatim copying, provided also that the title page is labeled as modified including a reference to the original standard, provided that information on retrieving the original standard is included, and provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this standard into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the copyright holder.

Dedication: This release is dedicated to the memory of Christopher Yeoh, a long-time friend and colleague, and one of the original editors of the FHS.
-
LMAX Disruptor
Disruptor: High performance alternative to bounded queues for exchanging data between concurrent threads. Martin Thompson, Dave Farley, Michael Barker, Patricia Gee, Andrew Stewart. May 2011. http://code.google.com/p/disruptor/

Abstract: LMAX was established to create a very high performance financial exchange. As part of our work to accomplish this goal we have evaluated several approaches to the design of such a system, but as we began to measure these we ran into some fundamental limits with conventional approaches. Many applications depend on queues to exchange data between processing stages. Our performance testing showed that the latency costs, when using queues in this way, were in the same order of magnitude as the cost of IO operations to disk (RAID or SSD based disk system) – dramatically slow. If there are multiple queues in an end-to-end operation, this will add hundreds of microseconds to the overall latency. There is clearly room for optimisation. Further investigation and a focus on the computer science made us realise that the conflation of concerns inherent in conventional approaches (e.g. queues and processing nodes) leads to contention in multi-threaded implementations, suggesting that there may be a better approach. Thinking about how modern CPUs work, something we like to call "mechanical sympathy", using good design practices with a strong focus on teasing apart the concerns, we came up with a data structure and a pattern of use that we have called the Disruptor. Testing has shown that the mean latency using the Disruptor for a three-stage pipeline is 3 orders of magnitude lower than an equivalent queue-based approach.
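The Disruptor itself is a Java library built around a pre-allocated ring buffer with sequence counters instead of locks. Purely as an illustration of the underlying idea (and not the LMAX implementation), here is a minimal single-producer/single-consumer ring buffer in C11 that avoids locking by publishing head/tail indices with acquire/release atomics:

/* Minimal single-producer/single-consumer ring buffer (illustrative only;
 * the real Disruptor is a Java library with a richer sequencing model). */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define RING_SIZE 1024                      /* power of two */
#define RING_MASK (RING_SIZE - 1)

struct ring {
    long buf[RING_SIZE];                    /* pre-allocated slots */
    _Atomic size_t head;                    /* next slot to write (producer) */
    _Atomic size_t tail;                    /* next slot to read (consumer) */
};

/* Producer: claim the next slot, fill it, then publish by advancing head. */
static bool ring_push(struct ring *r, long value)
{
    size_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

    if (head - tail == RING_SIZE)           /* buffer full */
        return false;
    r->buf[head & RING_MASK] = value;       /* write the slot first... */
    atomic_store_explicit(&r->head, head + 1, memory_order_release); /* ...then publish */
    return true;
}

/* Consumer: read the oldest published slot, then advance tail to release it. */
static bool ring_pop(struct ring *r, long *out)
{
    size_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head = atomic_load_explicit(&r->head, memory_order_acquire);

    if (tail == head)                       /* buffer empty */
        return false;
    *out = r->buf[tail & RING_MASK];        /* read the slot first... */
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release); /* ...then release */
    return true;
}

A zero-initialized buffer (e.g. static struct ring r;) starts empty. The Disruptor generalizes this single-producer/single-consumer shape by letting multiple dependent consumers track their own sequence numbers over one shared pre-allocated buffer, which is what removes the per-queue contention the paper measures.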
-
A Thread Synchronization Model for the PREEMPT RT Linux Kernel
Daniel B. de Oliveira (RHEL Platform/Real-time Team, Red Hat, Inc., Pisa, Italy; Department of Systems Automation, UFSC, Florianópolis, Brazil; RETIS Lab, Scuola Superiore Sant'Anna, Pisa, Italy), Rômulo S. de Oliveira (Department of Systems Automation, UFSC, Florianópolis, Brazil), Tommaso Cucinotta (RETIS Lab, Scuola Superiore Sant'Anna, Pisa, Italy).

Abstract: This article proposes an automata-based model for describing and validating sequences of kernel events in Linux PREEMPT RT and how they influence the timeline of threads' execution, comprising preemption control, interrupt handling and control, scheduling and locking. This article also presents an extension of the Linux tracing framework that enables the tracing of kernel events to verify the consistency of the kernel execution compared to the event sequences that are legal according to the formal model. This enables cross-checking of a kernel behavior against the formalized one, and in case of inconsistency, it pinpoints possible areas of improvement of the kernel, useful for regression testing. Indeed, we describe in details three problems in the kernel revealed by using the proposed technique, along with a short summary on how we reported and proposed fixes to the Linux kernel community. As an example of the usage of the model, the analysis of the events involved in the activation of the highest priority thread is presented, describing the delays occurred in this operation in the same granularity used by kernel developers. This illustrates how it is possible to take advantage of the model for analyzing the preemption model of Linux.

Keywords: Real-time computing, Operating systems, Linux kernel, Automata, Software verification, Synchronization.
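To make the "verify a trace against the formal model" idea concrete (a generic sketch, not the authors' tooling, and the states and events below are hypothetical rather than taken from their automata), a trace checker reduces to replaying recorded events through a deterministic finite automaton and flagging any transition the model does not allow:

/* Generic DFA-based trace checker (illustrative; not the paper's model). */
#include <stdio.h>

enum state { THREAD_SLEEPING, THREAD_RUNNABLE, THREAD_RUNNING, NUM_STATES };
enum event { EV_WAKEUP, EV_SCHED_IN, EV_SCHED_OUT, EV_SLEEP, NUM_EVENTS };

#define INVALID (-1)

/* transition[state][event] = next state, or INVALID if the model forbids it. */
static const int transition[NUM_STATES][NUM_EVENTS] = {
    [THREAD_SLEEPING] = { [EV_WAKEUP]    = THREAD_RUNNABLE,
                          [EV_SCHED_IN]  = INVALID,
                          [EV_SCHED_OUT] = INVALID,
                          [EV_SLEEP]     = INVALID },
    [THREAD_RUNNABLE] = { [EV_WAKEUP]    = INVALID,
                          [EV_SCHED_IN]  = THREAD_RUNNING,
                          [EV_SCHED_OUT] = INVALID,
                          [EV_SLEEP]     = INVALID },
    [THREAD_RUNNING]  = { [EV_WAKEUP]    = INVALID,
                          [EV_SCHED_IN]  = INVALID,
                          [EV_SCHED_OUT] = THREAD_RUNNABLE,
                          [EV_SLEEP]     = THREAD_SLEEPING },
};

/* Replay a recorded trace; return the index of the first illegal event,
 * or -1 if the whole trace is consistent with the model. */
static int check_trace(const enum event *trace, int len)
{
    int s = THREAD_SLEEPING;
    for (int i = 0; i < len; i++) {
        int next = transition[s][trace[i]];
        if (next == INVALID)
            return i;
        s = next;
    }
    return -1;
}

int main(void)
{
    enum event trace[] = { EV_WAKEUP, EV_SCHED_IN, EV_SLEEP, EV_SCHED_IN };
    int bad = check_trace(trace, 4);
    if (bad >= 0)
        printf("event %d is not allowed by the model\n", bad);
    return 0;
}

The paper's contribution goes further than this sketch by deriving the automata from the actual PREEMPT RT synchronization rules and feeding them from the kernel's tracing framework, but the cross-checking step has this same shape.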