Towards Predictable Real-Time Performance on Multi-Core Platforms

Total Pages: 16
File Type: PDF, Size: 1020 KB

Towards Predictable Real-Time Performance on Multi-Core Platforms

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering

Hyoseung Kim
B.S., Computer Science, Yonsei University, Seoul, Korea
M.S., Computer Science, Yonsei University, Seoul, Korea

arXiv:1607.08578v1 [cs.DC] 28 Jul 2016

Carnegie Mellon University
Pittsburgh, PA, USA
June 2016

Copyright © 2016 Hyoseung Kim

Keywords: Cyber-physical systems, Real-time embedded systems, Safety-critical systems, Multi-core platforms, Operating systems, Virtualization, Predictable performance.

Abstract

Cyber-physical systems (CPS) integrate sensing, computing, communication and actuation capabilities to monitor and control operations in the physical environment. A key requirement of such systems is the need to provide predictable real-time performance: the timing correctness of the system should be analyzable at design time with a quantitative metric and guaranteed at runtime with high assurance. This requirement of predictability is particularly important for safety-critical domains such as automobiles, aerospace, defense, manufacturing and medical devices. The work in this dissertation focuses on the challenges arising from the use of modern multi-core platforms in CPS. Even as of today, multi-core platforms are rarely used in safety-critical applications primarily due to the temporal interference caused by contention on various resources shared among processor cores, such as caches, memory buses, and I/O devices. Such interference is hard to predict and can significantly increase task execution time, e.g., up to 12× on commodity quad-core platforms. To address the problem of ensuring timing predictability on multi-core platforms, we develop novel analytical and systems techniques in this dissertation. Our proposed techniques theoretically bound the temporal interference that tasks may suffer when accessing shared resources. Our techniques also involve software primitives and algorithms for real-time operating systems and hypervisors, which significantly reduce the degree of temporal interference. Specifically, we tackle the issues of cache and memory contention, locking and synchronization, interrupt handling, and access control for computational accelerators such as general-purpose graphics processing units (GPGPUs), all of which are crucial to achieving predictable real-time performance on a modern multi-core platform. Our solutions are readily applicable to commodity multi-core platforms, and can be used not only for developing new systems but also for migrating existing applications from single-core to multi-core platforms.

Acknowledgments

This dissertation would have been impossible without the help and support of many people. First and foremost, I would like to thank my advisor, Prof. Raj Rajkumar. I was lucky to work with Raj. His guidance and expertise have made me a better thinker, writer, and researcher. Raj gave me opportunities to participate in exciting projects, demonstrate my research results, and mentor other students, all of which led me to become an independent researcher and to pursue an academic career. I am grateful to the members of my thesis committee, Prof. Onur Mutlu, Prof. Anthony Rowe, and Dr. Shige Wang, for their time, effort and input in completing this dissertation. Thanks to Onur for his insights on various aspects of my work.
I learned a lot from Onur on computer architecture, which was a great asset for my research. Thanks to Anthony for his feedback and advice, ever since my very early days at CMU. I enjoyed lively conversations with Anthony and liked to hear his views on cyber-physical systems. Thanks to Shige for giving me many inputs and motivating me with various practical examples. Working with Shige was a great pleasure for me. I would like to thank my research colleagues at the Software Engineering Institute (SEI): Dio de Niz, Björn Andersson, and Mark Klein. Our weekly meeting was an excellent opportunity to share lots of interesting discussions and do some good collaborative work. I also would like to thank Prof. John Lehoczky for his keen insight and wisdom during our meetings at SEI. I wish to thank the members of CMU's autonomous driving team: Prof. John Dolan, Jongho Lee, Tianyu Gu, Chiyu Dong, Adam Werries, Zhiding Yu, and all other former members. Their passion and efforts made me proud of being part of the team and contributing to our autonomous car. A special thanks to General Motors (GM), the National Science Foundation (NSF), and the Fulbright association for funding my research. Most of my time during my doctoral studies was spent at the Real-Time and Multimedia Systems Lab (RTML). Thanks to all the members of RTML who shared their time with me: Gaurav Bhatia, Karthik Lakshmanan, Arvind Kandhalu, Junsung Kim, Reza Azimi, Alexei Colin, Young-Woo Seo, Anand Bhat, Sandeep D'souza, and Shunsuke Aoki. Also, I would like to thank Toni M. Fox for her kind support with administrative work. Besides the RTML members, I am grateful to my friends at CIC: Max Buevich, Niranjini Rajagopal, Oliver Shih, Adwait Dongare, Donghyuk Lee, Sang Kil Cha, Gihyuk Ko, and Soo-Jin Moon. I am grateful to my Korean friends whom I met in Pittsburgh: Sungwon Yang, Jaesok Yu, Yongjune Kim, Min Suk Kang, Minhee Jun, and Kiryong Ha. Without these people, I could not have fully enjoyed my time at CMU. I would like to thank my old buddies who are currently geographically far from me but always on my side: JongMan Koo, Jaehun Ha, Kwangkyu Park, Hwan Lee, Jungho Kim, San Yoon, Junoh Jeon, Jungmyung Kim, and Woongjung Do. I am also very grateful to Wonwoo Jung, Shinyoung Yi, Jongho Rim, and Youngbin You for always being supportive of me. My family has given me their endless love and support. Thanks to my parents for being my parents. My immeasurable gratitude is due to them. Thanks to my parents-in-law for their understanding during the long years of my studies. Thanks to my brother-in-law, Taegon Lee, for his encouragement. Lastly, my thanks go to my wife, Whayoung Lee. She has been the greatest source of warmth, love and support since I met her. I would never have completed this dissertation without her.

Contents

1 Introduction
  1.1 Scope of This Work
    1.1.1 Multi-Core Platform and Shared Resources
    1.1.2 Tasks and Task Execution Environments
  1.2 Challenges with Shared Resources
    1.2.1 Concurrent Resources
    1.2.2 Mutually-Exclusive Resources
    1.2.3 Computational Accelerators
  1.3 Contributions
    1.3.1 Analytical and Systems Support for Concurrent Resources
    1.3.2 Analytical and Systems Support for Mutually-Exclusive Resources
    1.3.3 Analytical and Systems Support for Computational Accelerators
  1.4 Organization

2 Background and Related Work
  2.1 Cache Interference
    2.1.1 Page Coloring
    2.1.2 Problems with Page Coloring
    2.1.3 Related Work
  2.2 Memory Interference
    2.2.1 DRAM Organization
    2.2.2 Memory Controller
    2.2.3 Bank Address Mapping and Bank Partitioning
    2.2.4 Related Work
  2.3 Synchronization
    2.3.1 Timing Penalties from Mutually-Exclusive Resources
    2.3.2 Related Work
  2.4 Interrupt Handling
    2.4.1 Problems with Virtual Interrupts
    2.4.2 Related Work
  2.5 GPGPU Management
    2.5.1 GPU Execution Pattern
    2.5.2 Related Work

3 System Model
  3.1 Platform Model
  3.2 Task Model
  3.3 Virtual Machine Model
  3.4 Other Assumptions

4 Coordinated Approach for Predictable Cache Management
  4.1 Coordinated Cache Management
    4.1.1 Cache Reservation
    4.1.2 Cache Sharing: Bounding Intra-core Penalties
    4.1.3 Cache Sharing: How to Share Cache Partitions
    4.1.4 Cache-Aware Task Allocation
    4.1.5 Tasks with Shared Memory Regions
  4.2 Evaluation
    4.2.1 Implementation
    4.2.2 Taskset
    4.2.3 Cache Reservation
    4.2.4 Cache Sharing
    4.2.5 Cache-Aware Task Allocation
  4.3 Summary

5 Bounding and Reducing Memory Interference
  5.1 Bounding Memory Interference Delay
    5.1.1 Request-Driven Bounding Approach
    5.1.2 Job-Driven Bounding Approach
    5.1.3 Response-Time Based Schedulability Analysis
    5.1.4 Memory Controllers with Write Batching
    5.1.5 Combining with Cache Interference Analysis
  5.2 Reducing Memory Interference via Task Allocation
  5.3 Evaluation
    5.3.1 Memory Interference in a Real Platform
    5.3.2 Memory Interference-Aware Task Allocation
  5.4 Summary

6 Predictable Cache Management for Virtualization
  6.1 Cache Control in Virtualization
    6.1.1 Address Translation in Virtualization
    6.1.2 vLLC for Coloring-aware Guest OSs
    6.1.3 vColoring for Coloring-unaware Guest OSs
  6.2 Cache Management Scheme
    6.2.1 Schedulability Analysis
    6.2.2 Allocating Cache Partitions to Tasks
    6.2.3 Designing a Cache-Aware VM
    6.2.4 Allocating Host Cache Partitions to VMs
  6.3 Evaluation
    6.3.1 vLLC and vColoring
    6.3.2 Cache Management Scheme
  6.4 Summary

7 Synchronization for Multi-Core Virtual Machines
  7.1 vMPCP Framework
    7.1.1 Protocol Description
    7.1.2 VCPU Budget Overrun
    7.1.3 vMPCP Para-virtualization Interface
  7.2 vMPCP Schedulability Analysis
    7.2.1 VCPU Schedulability
    7.2.2 Task Schedulability
  7.3 Evaluation
    7.3.1 Comparison of Different Configurations
Recommended publications
  • A Lecture Note on CSC 322 Operating System I by Dr. S. A. Sodiya
    A LECTURE NOTE ON CSC 322 OPERATING SYSTEM I BY DR. S. A. SODIYA

    SECTION ONE
    1.0 INTRODUCTION TO OPERATING SYSTEMS
    1.1 DEFINITIONS OF OPERATING SYSTEMS
    An operating system (commonly abbreviated OS and O/S) is the infrastructure software component of a computer system; it is responsible for the management and coordination of activities and the sharing of the limited resources of the computer. An operating system is the set of programs that controls a computer. The operating system acts as a host for applications that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Operating systems can be viewed from two points of view: as a resource manager and as an extended machine. From the resource-manager point of view, operating systems manage the different parts of the system efficiently; from the extended-machine point of view, operating systems provide a virtual machine to users that is more convenient to use.

    1.2 HISTORICAL DEVELOPMENT OF OPERATING SYSTEMS
    Since operating systems have historically been tightly related to computer architecture, it is a good idea to study the history of operating systems from the architecture of the computers on which they run. Operating systems have evolved through a number of distinct phases or generations, which correspond roughly to decades.

    The 1940s - First Generation
    The earliest electronic digital computers had no operating systems. Machines of the time were so primitive that programs were often entered one bit at a time on rows of mechanical switches (plug boards).
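    The "extended machine" view can be made concrete with a small sketch. The C program below is purely illustrative (the file name data.txt is a placeholder): the application reads a file through the ordinary standard-library interface and never deals with disk geometry, device registers, or interrupts, because the operating system manages those shared resources on its behalf.

    /* Illustrative only: a user program using the OS as an "extended
     * machine". All hardware details of the read are handled by the OS. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("data.txt", "r");   /* hypothetical input file */
        if (f == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }
        char buf[256];
        /* Each call is translated by the OS into device-specific I/O,
         * buffered and scheduled alongside other processes. */
        while (fgets(buf, sizeof(buf), f) != NULL)
            fputs(buf, stdout);
        fclose(f);
        return EXIT_SUCCESS;
    }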
  • Optimizing Storage Performance with Calibrated Interrupts
    Optimizing Storage Performance with Calibrated Interrupts
    Amy Tai, VMware Research; Igor Smolyar, Technion — Israel Institute of Technology; Michael Wei, VMware Research; Dan Tsafrir, Technion — Israel Institute of Technology and VMware Research
    https://www.usenix.org/conference/osdi21/presentation/tai
    This paper is included in the Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation, July 14–16, 2021. ISBN 978-1-939133-22-9. Open access to the Proceedings is sponsored by USENIX.

    Abstract: After request completion, an I/O device must decide either to minimize latency by immediately firing an interrupt or to optimize for throughput by delaying the interrupt, anticipating that more requests will complete soon and help amortize the interrupt cost. Devices employ adaptive interrupt coalescing heuristics that try to balance between these opposing goals. Unfortunately, because devices lack the semantic information about which I/O requests are latency-sensitive, these heuristics […]

    …however, creates a trade-off between request latency and the interrupt rate. For the workloads we inspected, CPU utilization increases by as much as 55% without coalescing (Figure 12(d)), while under even the minimum amount of coalescing, request latency increases by as much as 10× for small requests, due to large timeouts. Interrupt coalescing is disabled by default in Linux, and real deployments use alternatives (§2). This paper addresses the challenge of dealing with exponentially increasing interrupt rates without sacrificing latency.
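    The trade-off described above can be illustrated with a toy decision routine. This is a simplified model of device-side interrupt coalescing in general, not the calibrated-interrupts mechanism the paper proposes, and the threshold and timeout constants are arbitrary example values.

    /* Toy model of adaptive interrupt coalescing: completions are batched
     * until a count threshold or a timeout is reached, trading added
     * latency for fewer interrupts. */
    #include <stdbool.h>
    #include <stdint.h>

    #define COALESCE_MAX_PENDING 8u     /* fire after this many completions */
    #define COALESCE_TIMEOUT_US  100u   /* ...or after waiting this long    */

    struct coalescer {
        uint32_t pending;               /* completions not yet signalled  */
        uint64_t oldest_us;             /* arrival time of oldest pending */
    };

    /* Decide whether the device should raise an interrupt now. */
    static bool should_fire(const struct coalescer *c, uint64_t now_us,
                            bool latency_sensitive)
    {
        if (c->pending == 0)
            return false;
        if (latency_sensitive)          /* e.g., a small synchronous read */
            return true;
        if (c->pending >= COALESCE_MAX_PENDING)
            return true;
        return (now_us - c->oldest_us) >= COALESCE_TIMEOUT_US;
    }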
  • Fitting Linux Device Drivers Into an Analyzable Scheduling Framework
    Fitting Linux Device Drivers into an Analyzable Scheduling Framework [Extended Abstract]
    Theodore P. Baker, An-I Andy Wang, Mark J. Stanovich
    Florida State University, Tallahassee, Florida 32306-4530
    [email protected], [email protected], [email protected]

    Abstract: API extensions and performance improvements to the Linux operating system now enable it to serve as a platform for a range of embedded real-time applications, using fixed-priority preemptive scheduling. Powerful techniques exist for analytical verification of application timing constraints under this scheduling model. However, when the application is layered over an operating system, the operating system must be included in the analysis. In particular, the computational workloads due to device drivers and other internal components of the operating system, and the ways they are scheduled, need to match abstract workload models and scheduling policies that are amenable to analysis. This paper assesses the degree to which the effects of device drivers in Linux can now be modeled adequately to admit fixed-priority preemptive schedulability […]

    …models of the theory. More specifically, applying the theory requires that the system workload corresponds to models that have been studied, and that the system schedules the workload according to one of the algorithms whose performance on such workloads has been analyzed. Where a real-time system is implemented on top of an operating system, these requirements apply to all the OS components as well as the user-level code. In Linux and several other POSIX/Unix-compliant [31] operating systems, progress has been made in providing real-time constructs so that user-level programmers can write applications that adhere to the theory of fixed-priority preemptive scheduling. Examples include preemptive priority-based real-time scheduling of user threads, high-precision software timers, and turning off virtual memory management for certain memory regions.
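    The real-time constructs mentioned in the last sentence can be sketched with standard POSIX calls. This is an illustrative example rather than code from the paper: the priority, period, and iteration count are arbitrary, and running under SCHED_FIFO normally requires root privileges (build with -lpthread, plus -lrt on older glibc).

    /* A periodic thread under fixed-priority preemptive scheduling:
     * SCHED_FIFO user thread, locked memory, high-precision absolute sleeps. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <time.h>

    #define PERIOD_NS 10000000L                /* 10 ms period (illustrative) */

    static void *periodic_task(void *arg)
    {
        (void)arg;
        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (int i = 0; i < 100; i++) {
            /* ... real-time work for this period goes here ... */
            next.tv_nsec += PERIOD_NS;
            if (next.tv_nsec >= 1000000000L) {
                next.tv_sec += 1;
                next.tv_nsec -= 1000000000L;
            }
            /* High-precision software timer: sleep until the next release. */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return NULL;
    }

    int main(void)
    {
        /* Turn off demand paging for this process's memory regions. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 50 };  /* illustrative */
        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);    /* fixed priority */
        pthread_attr_setschedparam(&attr, &sp);

        pthread_t tid;
        int err = pthread_create(&tid, &attr, periodic_task, NULL);
        if (err != 0)
            fprintf(stderr, "pthread_create: %s\n", strerror(err));
        else
            pthread_join(tid, NULL);
        return 0;
    }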
  • Junos® OS Release 19.3R3 for the ACX Series, EX Series, MX Series, NFX Series, PTX Series, QFX Series, SRX Series, and Junos Fusion
    Release Notes: Junos® OS Release 19.3R3 for the ACX Series, EX Series, MX Series, NFX Series, PTX Series, QFX Series, SRX Series, and Junos Fusion
    15 July 2021

    Contents
    Introduction
    Junos OS Release Notes for ACX Series
      What's New
        What's New in Release 19.3R3
        What's New in Release 19.3R2
        What's New in Release 19.3R1-S1
        What's New in Release 19.3R1
      What's Changed
        What's Changed in Release 19.3R3-S1
        What's Changed in Release 19.3R3
        What's Changed in Release 19.3R2-S6
        What's Changed in Release 19.3R2
        What's Changed in Release 19.3R1
      Known Limitations
        General Routing
      Open Issues
        General Routing
        Platform and Infrastructure
        Virtual Chassis
      Resolved Issues
        Resolved Issues: 19.3R3
        Resolved Issues: 19.3R2
        Resolved Issues: 19.3R1
      Documentation Updates
      Migration, Upgrade, and Downgrade Instructions
        Upgrade and Downgrade Support Policy for Junos OS Releases
    Junos OS Release Notes for EX Series Switches
      What's New
        What's New in 19.3R3
        What's New in 19.3R2
        What's New in 19.3R1
      What's Changed
        What's Changed in 19.3R3
        What's Changed in Release 19.3R2-S6
        What's Changed in 19.3R2
        What's Changed in 19.3R1
      Known Limitations
        EVPN
        Infrastructure
        Platform and Infrastructure
      Open Issues
        Authentication and Access Control
        Infrastructure
        Interfaces and Chassis
        Layer 2 Features
        Network Management and Monitoring
        Platform and Infrastructure
  • Tolerating Malicious Device Drivers in Linux
    Tolerating Malicious Device Drivers in Linux
    Silas Boyd-Wickizer and Nickolai Zeldovich, MIT CSAIL

    Abstract: This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and message-signaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link.

    …driver isolation techniques trust the driver not to subvert the isolation, or not to livelock the system. If attackers exploit a bug in the device driver [1, 5], they can proceed to subvert the isolation mechanism and compromise the entire system. While some systems can provide stronger guarantees [28, 33], they rely on the availability of a fully trusted, precise specification of the hardware device's behavior, which is rarely available for devices today. This paper presents the design and implementation of SUD, a kernel framework that provides complete isolation for untrusted device drivers in Linux, without the need for any special programming languages or specifications. SUD leverages recent hardware support to implement general-purpose mechanisms that ensure a misbehaving driver, and the hardware device that it manages, cannot […]
  • CS 4310: Operating Systems Lecture Notes - Student Version∗
    CS 4310: Operating Systems Lecture Notes - Student Version
    Kyle Burke, January 10, 2018

    Contents: -1.0 Using Chapel; 0 OS Basics (0.1 Interrupts); 1 Parallel Programming (using Chapel); 2 Hardware Threads; 3 Concurrency; 4 Semaphores; 5 The Producer-Consumer Problem (5.1 Circular Queues); 6 Memory Management; 7 Stack vs. Heap (7.1 Stack Management, 7.2 Heap Management); 8 Scheduling (8.1 Shortest-Job First, 8.2 First Come, First Serve, 8.3 Earliest-Deadline First, 8.4 Round Robin, 8.5 Hybrid Schedulers, 8.6 Examples); 9 Interrupts; 10 File Systems (10.1 Fragmentation); 11 OS Security; 12 History of OSes by Candace

    Created with lectureNotes.sty, which is available at: http://turing.plymouth.edu/~kgb1013/lectureNotesLatexStyle.php (or, GitHub: https://github.com/paithan/LaTeX-LectureNotes). Many or most of the answers to questions are hidden so that some of class will still be a challenge for students.

    -1.0 Using Chapel
    This course was last taught with programming assignments given in Chapel, using compiler version 1.14. This is a High-Performance Computing language designed to make parallel programming easier for computational scientists. Here are some comments about this language.
    • It makes launching threads and handling synchronization very easy.
    • It is missing many of the basic libraries that exist for common languages (e.g. Java).
    • Goal: focus on the

    0 OS Basics
    〈 Go over syllabus! 〉
    Q: What are the responsibilities of an Operating System? What do they do?
    A: • TODO

    0.1 Interrupts
    This material is currently in 9.
  • Shared IRQ Line Considerations AN-PM-059
    Application Note: Shared IRQ Line Considerations (AN-PM-059)

    Abstract: When IRQ line-sharing between multiple devices has been imposed by the target hardware design, a system failure may occur that is intrinsic to the Linux kernel. This document outlines recommendations to avoid such issues. Several solutions have been identified and each should be considered on its merits for the target platform under examination.

    Contents: Abstract; Contents; Figures; 1 Terms and Definitions; 2 References; 3 Introduction; 4 Shared Interrupt Line; 4.1 Symptoms and Mode of Failure in the Linux
  • US Patent 8,533,709 B2 (Nicholas et al., Microsoft Corporation): catch-up mode for a virtual programmable interrupt timer in a virtual machine
    US 8,533,709 B2. Inventors: Andrew Ernest Nicholas, Bellevue, WA (US); Rene Antonio Vega, Kirkland, WA (US). Assignee: Microsoft Corporation, Redmond, WA (US). Date of Patent: Sep. 10, 2013. Appl. No. 11/197,614, filed Aug. 4, 2005; prior publication data US 2007/0033589 A1, Feb. 8, 2007. (Front-page figure: expected vs. received timer firings; return to normal mode.)

    Abstract: A catch-up mode that runs a virtual programmable interrupt timer faster than a nominal rate to prevent time loss in a virtual machine can be implemented. If time loss is determined, a catch-up mode can be initiated to cause increased firings, beyond a nominal rate, of the programmable interrupt timer to adjust the clock of the virtual machine to the clock of the host.
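    The abstract's catch-up idea can be pictured with a short sketch. This illustrates the general mechanism only, not code from the patent or from any hypervisor; the nominal rate, the catch-up multiplier, and all names are invented for the example.

    /* Sketch of a virtual-timer catch-up mode: while the guest has received
     * fewer timer ticks than host time implies, deliver them faster than the
     * nominal rate so the guest clock catches up with the host clock. */
    #include <stdint.h>

    #define NOMINAL_HZ   100u          /* nominal virtual timer rate          */
    #define CATCHUP_MULT 4u            /* deliver 4x faster while behind      */

    struct vtimer {
        uint64_t expected_ticks;       /* ticks implied by elapsed host time  */
        uint64_t delivered_ticks;      /* ticks actually injected into guest  */
    };

    /* Called from the hypervisor's timekeeping path with current host time. */
    static uint32_t vtimer_rate(struct vtimer *t, uint64_t host_time_us)
    {
        t->expected_ticks = host_time_us * NOMINAL_HZ / 1000000u;
        if (t->delivered_ticks < t->expected_ticks)
            return NOMINAL_HZ * CATCHUP_MULT;   /* catch-up mode */
        return NOMINAL_HZ;                      /* normal mode   */
    }

    /* Called whenever a virtual timer interrupt is injected into the guest. */
    static void vtimer_tick_delivered(struct vtimer *t)
    {
        t->delivered_ticks++;
    }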
  • Development of "Interrupt Storm Detection" Feature
    Development of "Interrupt Storm Detection" Feature
    October 2020
    Issued by Kento Kobayashi, R&D Center, Sony Corporation
    Copyright 2020 Sony Corporation

    Agenda
    • Background
    • What is an interrupt storm?
    • Cases of interrupt storms
    • Existing ways to debug interrupt storms for each case
    • Our solution: the interrupt storm detection feature
    • Example of using the interrupt storm detection feature on an actual problem

    Self introduction
    • Name: Kento Kobayashi
    • Company: Sony Corporation
    • Responsible for: Linux kernel and device drivers for Sony products

    Background

    What is an "Interrupt Storm"?
    • An "interrupt storm" is a continuous stream of hardware interrupts to the CPU, so the CPU needs to execute interrupt handlers continuously.
    • An "interrupt storm" causes:
      • system hang-up due to high CPU utilization by the interrupt handler;
      • difficulty in debugging, because the console does not respond.
    • To debug an interrupt storm, we need to identify the IRQ number that causes it. (Figure: number of interrupts over time.)
    • Cases of "interrupt storm":
      • Case 1: unhandled (spurious) interrupt
      • Case 2: high-frequency handled interrupt

    Case 1: Unhandled (spurious) interrupt
    • What is an "unhandled (spurious) interrupt"?
      • The interrupt handler does not handle the hardware interrupt.
    • Why do "unhandled (spurious) interrupts" occur?
      • Because of a problem in the device driver: the interrupt handler does nothing if the interrupt is not its own, so the interrupt status is never cleared and the interrupt is raised continuously.
    • Example of an "unhandled (spurious) interrupt" case
      • An IRQ line shared by multiple device drivers: each interrupt handler is executed whether or not the interrupt is its own, and if a handler wrongly fails to recognize the interrupt as its own, nobody handles the raised interrupt.
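    As a reference point for the shared-IRQ case described on these slides, the sketch below shows the shape of a well-behaved handler on a shared line, using the standard Linux kernel interfaces (request_irq with IRQF_SHARED, IRQ_NONE/IRQ_HANDLED). The device, its register offsets, and all names are hypothetical.

    /* A handler on a shared IRQ line: check whether our device actually
     * raised the interrupt, clear its status if so, otherwise return
     * IRQ_NONE so the kernel's spurious-interrupt accounting can react. */
    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define MYDEV_IRQ_STATUS 0x10          /* hypothetical status register    */
    #define MYDEV_IRQ_ACK    0x14          /* hypothetical ack/clear register */

    struct mydev {
        void __iomem *regs;
    };

    static irqreturn_t mydev_irq(int irq, void *dev_id)
    {
        struct mydev *dev = dev_id;
        u32 status = readl(dev->regs + MYDEV_IRQ_STATUS);

        if (!status)
            return IRQ_NONE;    /* not ours: another device on the shared
                                   line raised this interrupt */

        writel(status, dev->regs + MYDEV_IRQ_ACK);  /* clear it, or it storms */
        /* ... handle the work indicated by 'status' ... */
        return IRQ_HANDLED;
    }

    /* Registration, typically from the driver's probe() function:
     *   ret = request_irq(irq, mydev_irq, IRQF_SHARED, "mydev", dev);
     */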
  • Comparing and Improving Current Packet Capturing Solutions Based on Commodity Hardware
    Comparing and Improving Current Packet Capturing Solutions based on Commodity Hardware
    Lothar Braun, Alexander Didebulidze, Nils Kammenhuber, Georg Carle
    Technische Universität München, Institute for Informatics, Chair for Network Architectures and Services
    {braun,didebuli,kammenhuber,carle}@net.in.tum.de

    Abstract: Capturing network traffic with commodity hardware has become a feasible task: Advances in hardware as well as software have boosted off-the-shelf hardware to performance levels that some years ago were the domain of expensive special-purpose hardware. However, the capturing hardware still needs to be driven by a well-performing software stack in order to minimise or avoid packet loss. Improving the capturing stack of Linux and FreeBSD has been an extensively covered research topic in the past years. Although the majority of the proposed enhancements have been backed by evaluations, these have mostly been conducted on different hardware platforms and software versions, which renders a comparative assessment of the various approaches difficult […]

    …manufactured by Endace [1]—was mandatory for capturing Gigabit or Multi-gigabit network traffic, if little or no packet loss was a requirement. With recent development progresses in bus systems, multi-core CPUs and commodity network cards, nowadays off-the-shelf hardware can be used to capture network traffic at near wire-speed with little or no packet loss in 1 GE networks, too [2, 3]. People are even building monitoring devices based on commodity hardware that can be used to capture traffic in 10 GE networks [4, 5]. However, this is not an easy task, since it requires careful configuration and optimization of the hardware and software components involved—even the best hardware will suffer packet loss if its driving software stack is not able to handle the huge amount of network packets.
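    As a concrete illustration of what configuring the "driving software stack" can look like from user space, here is a minimal libpcap sketch that enlarges the kernel capture buffer before activating the handle, which makes drops during bursts less likely. The interface name and sizes are illustrative, and this is not code from the paper (build with -lpcap).

    /* Minimal libpcap capture loop with an enlarged kernel buffer. */
    #include <pcap/pcap.h>
    #include <stdio.h>

    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
    {
        (void)user; (void)bytes;
        printf("captured %u bytes\n", h->caplen);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *p = pcap_create("eth0", errbuf);    /* illustrative interface */
        if (p == NULL) {
            fprintf(stderr, "pcap_create: %s\n", errbuf);
            return 1;
        }
        pcap_set_snaplen(p, 65535);
        pcap_set_promisc(p, 1);
        pcap_set_timeout(p, 100);                   /* milliseconds           */
        pcap_set_buffer_size(p, 64 * 1024 * 1024);  /* 64 MiB kernel buffer   */
        if (pcap_activate(p) != 0) {
            fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(p));
            return 1;
        }
        pcap_loop(p, 100, on_packet, NULL);         /* capture 100 packets    */
        pcap_close(p);
        return 0;
    }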