Process Synchronization


CHAPTER 5
Process Synchronization

Now that we have provided a grounding in synchronization theory, we can describe how Java synchronizes the activity of threads, allowing the programmer to develop generalized solutions to enforce mutual exclusion between threads. When an application ensures that data remain consistent even when accessed concurrently by multiple threads, the application is said to be thread-safe.

5.1 Bounded Buffer

In Chapter 3, we described a shared-memory solution to the bounded-buffer problem. This solution suffers from two disadvantages. First, both the producer and the consumer use busy-waiting loops if the buffer is either full or empty. Second, the variable count, which is shared by the producer and the consumer, may develop a race condition, as described in Section 5.1. This section addresses these and other problems while developing a solution using Java synchronization mechanisms.

5.1.1 Busy Waiting and Livelock

Busy waiting was introduced in Section 5.6.2, where we examined an implementation of the acquire() and release() semaphore operations. In that section, we described how a process could block itself as an alternative to busy waiting. One way to accomplish such blocking in Java is to have a thread call the Thread.yield() method. Recall from Section 6.1 that, when a thread invokes the yield() method, the thread stays in the runnable state but allows the JVM to select another runnable thread to run. The yield() method makes more effective use of the CPU than busy waiting does.

In this instance, however, using either busy waiting or yielding may lead to another problem, known as livelock. Livelock is similar to deadlock; both prevent two or more threads from proceeding, but the threads are unable to proceed for different reasons. Deadlock occurs when every thread in a set is blocked waiting for an event that can be caused only by another blocked thread in the set. Livelock occurs when a thread continuously attempts an action that fails.

    // Producers call this method
    public synchronized void insert(E item) {
       while (count == BUFFER_SIZE)
          Thread.yield();

       buffer[in] = item;
       in = (in + 1) % BUFFER_SIZE;
       ++count;
    }

    // Consumers call this method
    public synchronized E remove() {
       E item;

       while (count == 0)
          Thread.yield();

       item = buffer[out];
       out = (out + 1) % BUFFER_SIZE;
       --count;

       return item;
    }

Figure 5.1 Synchronized insert() and remove() methods.

Here is one scenario that could cause livelock. Recall that the JVM schedules threads using a priority-based algorithm, favoring high-priority threads over threads with lower priority. If the producer has a priority higher than that of the consumer and the buffer is full, the producer will enter the while loop and either busy-wait or yield() to another runnable thread while waiting for count to be decremented to less than BUFFER_SIZE. As long as the consumer has a priority lower than that of the producer, it may never be scheduled by the JVM to run and therefore may never be able to consume an item and free up buffer space for the producer. In this situation, the producer is livelocked waiting for the consumer to free buffer space. We will see shortly that there is a better alternative than busy waiting or yielding while waiting for a desired event to occur.

5.1.2 Race Condition

In Section 5.1, we saw an example of the consequences of a race condition on the shared variable count.
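The race is easy to reproduce. The following is a minimal sketch, not from the text (the class name, thread count, and loop bound are our own); two threads increment an unsynchronized shared counter, and lost updates typically leave the final total below the expected 200000:

    public class RaceDemo {
       private static int count = 0; // shared, unsynchronized counter

       public static void main(String[] args) throws InterruptedException {
          Runnable increment = () -> {
             for (int i = 0; i < 100000; i++)
                count++; // read-modify-write sequence; not atomic
          };

          Thread a = new Thread(increment);
          Thread b = new Thread(increment);
          a.start();
          b.start();
          a.join();
          b.join();

          // Expected 200000; interleaved updates usually print less.
          System.out.println("count = " + count);
       }
    }

Declaring the increment to be synchronized removes the race, which is exactly the approach Figure 5.1 applies to the bounded buffer.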
Figure 5.1 illustrates how Java's handling of concurrent access to shared data prevents race conditions. In describing this situation, we introduce a new keyword: synchronized.

Every object in Java has associated with it a single lock. An object's lock may be owned by a single thread. Ordinarily, when an object is being referenced (that is, when its methods are being invoked), the lock is ignored. When a method is declared to be synchronized, however, calling the method requires owning the lock for the object. If the lock is already owned by another thread, the thread calling the synchronized method blocks and is placed in the entry set for the object's lock. The entry set represents the set of threads waiting for the lock to become available. If the lock is available when a synchronized method is called, the calling thread becomes the owner of the object's lock and can enter the method. The lock is released when the thread exits the method. If the entry set for the lock is not empty when the lock is released, the JVM arbitrarily selects a thread from this set to be the owner of the lock. (When we say "arbitrarily," we mean that the specification does not require that threads in this set be organized in any particular order. However, in practice, most virtual machines order threads in the wait set according to a FIFO policy.)

Figure 5.2 Entry set. (The figure shows a thread acquiring the object lock, the lock's owner, and the entry set of waiting threads.)

Figure 5.2 illustrates how the entry set operates. If the producer calls the insert() method, as shown in Figure 5.1, and the lock for the object is available, the producer becomes the owner of the lock; it can then enter the method, where it can alter the value of count and other shared data. If the consumer attempts to call the synchronized remove() method while the producer owns the lock, the consumer will block because the lock is unavailable. When the producer exits the insert() method, it releases the lock. The consumer can now acquire the lock and enter the remove() method.

5.1.3 Deadlock

At first glance, this approach appears at least to solve the problem of having a race condition on the variable count. Because both the insert() method and the remove() method are declared synchronized, we have ensured that only one thread can be active in either of these methods at a time. However, lock ownership has led to another problem.

Assume that the buffer is full and the consumer is sleeping. If the producer calls the insert() method, it will be allowed to continue, because the lock is available. When the producer invokes the insert() method, it sees that the buffer is full and performs the yield() method. All the while, the producer still owns the lock for the object. When the consumer awakens and tries to call the remove() method (which would ultimately free up buffer space for the producer), it will block because it does not own the lock for the object. Thus, both the producer and the consumer are unable to proceed because (1) the producer is blocked waiting for the consumer to free space in the buffer and (2) the consumer is blocked waiting for the producer to release the lock.
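For reference, the insert() and remove() methods of Figures 5.1 and 5.3 manipulate fields that the figures themselves do not declare. A minimal sketch of the enclosing class might look as follows; the field declarations and the capacity value are assumptions inferred from how the methods use them:

    public class BoundedBuffer<E> {
       private static final int BUFFER_SIZE = 5; // capacity is an assumed value

       private int count;  // number of items currently in the buffer
       private int in;     // index of the next free slot
       private int out;    // index of the next item to remove
       private E[] buffer; // circular array holding the items

       @SuppressWarnings("unchecked")
       public BoundedBuffer() {
          count = 0;
          in = 0;
          out = 0;
          // Generic arrays cannot be created directly; cast an Object array.
          buffer = (E[]) new Object[BUFFER_SIZE];
       }

       // The synchronized insert() and remove() methods of
       // Figure 5.1 (or Figure 5.3, below) go here.
    }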
    // Producers call this method
    public synchronized void insert(E item) {
       while (count == BUFFER_SIZE) {
          try {
             wait();
          }
          catch (InterruptedException e) {}
       }

       buffer[in] = item;
       in = (in + 1) % BUFFER_SIZE;
       ++count;

       notify();
    }

    // Consumers call this method
    public synchronized E remove() {
       E item;

       while (count == 0) {
          try {
             wait();
          }
          catch (InterruptedException e) {}
       }

       item = buffer[out];
       out = (out + 1) % BUFFER_SIZE;
       --count;

       notify();

       return item;
    }

Figure 5.3 insert() and remove() methods using wait() and notify().

By declaring each method as synchronized, we have prevented the race condition on the shared variables. However, the presence of the yield() loop has led to a possible deadlock.

5.1.4 Wait and Notify

Figure 5.3 addresses the yield() loop by introducing two new Java methods: wait() and notify(). In addition to having a lock, every object also has associated with it a wait set consisting of a set of threads. This wait set is initially empty. When a thread enters a synchronized method, it owns the lock for the object. However, this thread may determine that it is unable to continue because a certain condition has not been met. That will happen, for example, if the producer calls the insert() method and the buffer is full. The thread then will release the lock and wait until the condition that will allow it to continue is met, thus avoiding the previous deadlock situation.

When a thread calls the wait() method, the following happens:

1. The thread releases the lock for the object.
2. The state of the thread is set to blocked.
3. The thread is placed in the wait set for the object.

Consider the example in Figure 5.3. If the producer calls the insert() method and sees that the buffer is full, it calls the wait() method. This call releases the lock, blocks the producer, and puts the producer in the wait set for the object. Because the producer has released the lock, the consumer ultimately enters the remove() method, where it frees space in the buffer for the producer. Figure 5.4 illustrates the entry and wait sets for a lock. (Note that wait() can result in an InterruptedException being thrown. We will cover this in Section 5.6.)

Figure 5.4 Entry and wait sets. (The figure shows a thread acquiring the object lock, the lock's owner, the entry set of threads waiting for the lock, and the wait set of threads that have called wait().)

How does the consumer thread signal that the producer may now proceed? Ordinarily, when a thread exits a synchronized method, the departing thread releases only the lock associated with the object, possibly removing a thread from the entry set and giving it ownership of the lock. However, at the end of the synchronized insert() and remove() methods, we have a call to the method notify(). The call to notify():

1. Picks an arbitrary thread T from the list of threads in the wait set
2. Moves T from the wait set to the entry set
3. Sets the state of T from blocked to runnable

T is now eligible to compete for the lock with the other threads.
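To see the wait()/notify() version at work, a small driver can run one producer and one consumer against a shared buffer. This is an illustrative sketch, assuming a BoundedBuffer class like the one sketched earlier, equipped with the Figure 5.3 methods:

    public class BufferTest {
       public static void main(String[] args) {
          BoundedBuffer<Integer> buffer = new BoundedBuffer<>();

          Thread producer = new Thread(() -> {
             for (int i = 0; i < 10; i++) {
                buffer.insert(i); // blocks in wait() while the buffer is full
                System.out.println("produced " + i);
             }
          });

          Thread consumer = new Thread(() -> {
             for (int i = 0; i < 10; i++) {
                Integer item = buffer.remove(); // blocks in wait() while the buffer is empty
                System.out.println("consumed " + item);
             }
          });

          producer.start();
          consumer.start();
       }
    }

Because insert() and remove() block in wait() rather than busy-waiting or yielding, neither thread consumes CPU time while the buffer is in the wrong state, and neither thread holds the lock while it waits.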