Runtime Verification

Carl Martin Rosenberg
INF5140, June 2015

Plan

In broad terms, I hope to answer the following questions:

- What is Runtime Verification?
- Why is it useful?
- How does Runtime Verification compare to traditional Model Checking?
- How are the things we have learned in INF5140 used in Runtime Verification?

After that, I will do a short live demo of how to use a tool called DTrace for Runtime Verification.

What is Runtime Verification?

Runtime Verification is an emerging research field with strong ties to Model Checking. It has had its own workshop series since 2001.¹ As a first approximation, it is concerned with checking software based on data from actual runs of the software. It borrows many formalisms and methods from Model Checking, notably LTL.

¹ See http://www.runtime-verification.org/

What is Runtime Verification?

More precisely, let us follow [6, p. 36] and use the following definition:

"Runtime verification is the discipline of computer science that deals with the study, development and application of those verification techniques that allow checking whether a run of a system under scrutiny satisfies or violates a given correctness property."

Unpacking the definition

1. We can understand verification as "[a]ll techniques suitable for showing that a system satisfies its specification" [7, p. 294].
2. What about a run?

The concept of a run

In Runtime Verification, a run is typically represented by some log or trace. The trace can either represent a sequence of program states, or a series of events representing the program behavior (I/O, syscalls). Let us consider these two options more closely. [7, p. 294]

Traces as a sequence of program states

One notion of a trace is that of a sequence of program states. This notion of a run is similar to the notion of a computation in traditional Model Checking [4, p. 13]. If we had a graph that modeled the program, where each node represented a program state (i.e. a set of variable assignments), a run would be a (possibly looping) path in this graph.

Traces as a sequence of program behaviors

Another notion of a trace is that of a behavior history. In this variety, we treat the program as a black box [7, p. 295] and conduct the verification based only on what we can see "from the outside", by analyzing the program's I/O or inspecting the interaction between the program and the operating system.
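To make this black-box notion concrete, the sketch below uses the D language of DTrace (the tool demonstrated at the end of the talk) to record exactly such a behavior trace: every system call issued by a process, in order. This is only an illustrative sketch; the program name `myprog` is a hypothetical stand-in.

```d
#!/usr/sbin/dtrace -s
/*
 * Illustrative sketch: extract a behavior trace from a running
 * program without looking inside it. Each output line is one event
 * (timestamp + syscall name) of the finite trace that a monitor can
 * later check. "myprog" is a hypothetical process name.
 */
syscall:::entry
/execname == "myprog"/
{
	printf("%d %s\n", timestamp, probefunc);
}
```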
Two important differences from Model Checking

The preceding discussion of traces highlights two important differences between traditional Model Checking and Runtime Verification:

1. In Runtime Verification, we check whether a property holds for a single run of the system. Model Checking typically checks all possible runs over the program model. [7, p. 294]
2. While Model Checking can operate with infinite runs, Runtime Verification is restricted to finite runs.

All traces in Runtime Verification are finite

Traditional Model Checking often wants to analyze programs that ideally should run forever (e.g. servers, core infrastructure). Therefore, traditional Model Checking allows for (and often presumes) infinite runs. Runtime Verification, however, bases itself on real-world runs of systems, and these are by necessity finite. This has some implications for how we use LTL in Runtime Verification, as we shall see. [7, pp. 294-295]

From Runs to Monitors

Having humans conduct runtime verification by inspecting traces would be error-prone and infeasible. We want to make the computer analyze the traces in a systematic manner. This brings up the concept of a monitor.

Monitor

A monitor is a device that reads a finite trace and yields a certain verdict [7, p. 294]. A verdict is a value from some truth domain, e.g. {true, false}. The monitor is essentially a decider for the property in question. Figuring out how to create a monitor for a given specification is a central theme in Runtime Verification; Martin Leucker calls it "the distinguishing research effort" [6, p. 36] of Runtime Verification.

The basic picture

(Figure: a monitor reading the trace of a running system and emitting a verdict.)

Offline versus online monitoring

If the monitor consumes the trace while the program is still running, we call this online monitoring. If, on the other hand, the trace is consumed after the execution has finished, we call it offline monitoring. [7, p. 295]

Online monitoring

Online monitoring is especially interesting because it opens up the possibility of not only observing the program, but also reacting if something bad happens [7]. However, online monitoring also requires us to be extra careful: we should not let the act of monitoring the program interfere excessively with the program's execution. Hence, online monitoring makes it even more important to create monitors that

1. use as little memory as possible, and
2. use as little CPU time as possible.

Criteria for good monitors: Impartiality and Anticipation

Online monitoring requires monitors that are time- and space-efficient. Also, since Runtime Verification works on finite traces, we need to ensure that

1. no monitor gives a verdict based on incomplete information, and
2. as soon as sufficient information is obtained, the monitor gives a verdict.

Martin Leucker [7, p. 295] calls these impartiality and anticipation, respectively. The sketch below illustrates both criteria on a concrete safety property.
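Here is a small D sketch of an online monitor for the safety property "the program never deletes a file": anticipation lets it conclude false the moment a violation occurs, while impartiality forbids it from ever saying true while the run could continue. The property, the process name `myprog`, and the choice of unlink as the "bad event" are all illustrative assumptions, not part of the original slides.

```d
#!/usr/sbin/dtrace -s
/*
 * Illustrative sketch of an impartial, anticipatory online monitor.
 * Property (chosen for illustration): "myprog never calls unlink".
 * The monitor runs in constant memory and does almost no work per
 * event, so it interferes little with the observed program.
 */
syscall::unlink:entry
/execname == "myprog"/
{
	/* Anticipation: one bad event settles the verdict for every
	 * possible continuation of the run, so report false at once. */
	violated = 1;
	printf("verdict: false (unlink(\"%s\") observed)\n",
	    copyinstr(arg0));
	exit(0);
}

dtrace:::END
/!violated/
{
	/* Impartiality: the end of our observation is not the end of
	 * all possible runs, so "no violation yet" is reported as
	 * inconclusive rather than true. */
	printf("verdict: inconclusive\n");
}
```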
Achieving impartiality: Going beyond true and false

If our monitor can only output true or false, we might get misleading results [7, p. 295]. Consider the LTL formula □¬p, which reads "no state satisfying p should occur". If a p is observed, the monitor should yield false. However, as long as no p has been observed, one should not say true, because p might still occur sometime in the future. In this case, it would be better for the monitor to report inconclusive [7, p. 297].

Another example: consider the LTL formula ♦p, which reads "eventually a p is observed". In this case, we should only report true when we find a p. Otherwise, the monitor should say inconclusive. [7, p. 297]

In [7], Martin Leucker presents a variant of LTL that incorporates inconclusive into the semantics of the logic. He calls the resulting logic LTL3.

LTL3

Leucker defines the semantics of LTL3 as follows. For a finite trace u and a formula φ:

- if there is no continuation of u satisfying φ (considered as an LTL formula), the value of φ is false;
- if every continuation of u satisfies φ (considered as an LTL formula), it is true;
- otherwise, the value is inconclusive, since the observations so far do not suffice to determine either true or false.

[7, p. 297]

Limitations of standard LTL

There are many things we wish to verify that cannot easily be expressed in LTL. Suppose we want to verify that every opened file is eventually closed. As a first approximation, we could say □(open → ♦close). This gets close, but only says that "there should be a close after every open". Typically, we want a way to say that if file x is opened, file x eventually gets closed. [7, p. 298]

The file descriptor example

Volker Stolz and a former student of his, Eric Bodden, developed a way to enhance LTL to express this property.² In addition to the formalism for expressing the property, they developed a way to extract information from the running program using aspect-oriented programming on the Java Virtual Machine. I will end by demonstrating how something similar can be achieved using DTrace.

² The paper cited in [7, p. 298] is [8]. See also [1].

DTrace in one slide

DTrace is an operating system technology originally developed for troubleshooting performance issues [3], [2], [5]. It can be used to harvest all kinds of data:

1. information about the inner workings of a user program,
2. information about the syscalls a program executes,
3. detailed information about what goes on in the kernel,
4. and much, much more.

In my master's thesis, I explore how DTrace can be used for Runtime Verification. Let's see it in action! (A sketch of the kind of monitor involved appears after the bibliography.)

Bibliography I

Eric Bodden.
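To give a flavor of what the demo involves, here is a hedged D sketch of a parameterized monitor for the file-descriptor property above: DTrace's associative arrays let the monitor track one open/close obligation per descriptor, with the descriptor playing the role of the parameter x. This is a sketch rather than the actual demo; `myprog` is again a hypothetical process name, and the script assumes descriptors are obtained via the open family of syscalls.

```d
#!/usr/sbin/dtrace -s
/*
 * Illustrative sketch of a parameterized monitor: "every file
 * descriptor that myprog opens is eventually closed". The file
 * descriptor is the parameter; the associative array pending[]
 * holds one outstanding obligation per descriptor.
 */
syscall::open*:return
/execname == "myprog" && (int)arg0 >= 0/
{
	/* A successful open creates an obligation to close fd arg0. */
	pending[arg0] = 1;
	opens++;
}

syscall::close:entry
/execname == "myprog" && pending[arg0]/
{
	/* Closing a tracked descriptor discharges its obligation. */
	pending[arg0] = 0;
	closes++;
}

dtrace:::END
{
	/* At the end of the observed (finite) run, any undischarged
	 * obligation falsifies the property for that run. */
	printf("verdict: %s\n", opens == closes
	    ? "true (every open was matched by a close)"
	    : "false (some descriptor was never closed)");
}
```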