Object-Oriented Concurrency for Network Programming in C++

Douglas C. Schmidt
Washington University, St. Louis
http://www.cs.wustl.edu/~schmidt/
[email protected]


Motivation for Concurrency

- Concurrent programming is increasingly relevant to:
  - Leverage hardware/software advances
    - e.g., multi-processors and OS support
  - Increase performance
    - e.g., overlap computation and communication
  - Improve response-time
    - e.g., GUIs and network servers
  - Simplify program structure
    - e.g., synchronous vs. asynchronous network IPC

Motivation for Distribution

- Benefits of distribution:
  - Collaboration → connectivity and interworking
  - Performance → multi-processing and locality
  - Reliability and availability → replication
  - Scalability and portability → modularity
  - Extensibility → dynamic configuration and reconfiguration
  - Cost effectiveness → open systems and resource sharing


Challenges and Solutions

- However, developing efficient, robust, and extensible concurrent networking applications is hard
  - e.g., must address complex topics that are less problematic or not relevant for non-concurrent, stand-alone applications
- Object-oriented (OO) techniques and OO language features help to enhance concurrent software quality factors
  - Key OO techniques include design patterns and frameworks
  - Key OO language features include classes, inheritance, dynamic binding, and parameterized types
  - Key software quality factors include modularity, extensibility, portability, reusability, and correctness

Tutorial Outline

- Outline key OO networking and concurrency concepts and OS platform mechanisms
  - Emphasis is on practical solutions
- Examine several examples in detail
  1. Concurrent WWW client/server
  2. OO framework for layered active objects
- Discuss general concurrent programming strategies


Caveats

- OO is not a panacea
  - However, when used properly it helps minimize "accidental" complexity and improve software quality factors
- Advanced OS features provide additional functionality and performance, e.g.,
  - Multi-threading
  - Multi-processing
  - Synchronization
  - Shared memory
  - Explicit dynamic linking
  - Communication protocols and IPC mechanisms

Definitions

- Concurrency
  - "Logically" simultaneous processing
  - Does not imply multiple processing elements
- Parallelism
  - "Physically" simultaneous processing
  - Involves multiple processing elements and/or independent device operations
- Distribution
  - Partition system/application into multiple components that can reside on different hosts
  - Implies message passing as primary IPC mechanism


Software Development Environment

- The topics discussed here are largely independent of OS, network, and programming language
  - Currently used successfully on UNIX and Windows NT platforms, running on TCP/IP and IPX/SPX networks, using C++
- Examples are illustrated using the freely available ADAPTIVE Communication Environment (ACE) OO framework components
  - Although ACE is written in C++, the principles covered in this tutorial apply to other OO languages
    - e.g., Java, Eiffel, Smalltalk, etc.

Concurrency vs. Parallelism

[Figure: a concurrent server time-slices multiple client services on a single CPU, while a parallel server executes client services simultaneously on CPUs 1-4.]


Stand-alone vs. Distributed Application Architectures

[Figure: (1) a stand-alone application architecture, where the client accesses local resources (file system, printer, CD-ROM) directly; (2) a distributed application architecture, where clients send work requests across a network to name, time, display, file, print, and cycle services.]

Sources of Complexity

- Concurrent network application development exhibits both inherent and accidental complexity
- Inherent complexity results from fundamental challenges
  - Concurrent programming
    - Eliminating "race conditions"
    - Deadlock avoidance
    - Fair scheduling
    - Performance optimization and tuning
  - Distributed programming
    - Addressing the impact of latency
    - Fault tolerance and high availability
    - Load balancing and service partitioning
    - Consistent ordering of distributed events


Sources of Complexity (cont'd)

- Accidental complexity results from limitations with the tools and techniques used to develop concurrent applications, e.g.,
  - Lack of portable, reentrant, type-safe and extensible system call interfaces and component libraries
  - Inadequate debugging support and lack of concurrent and distributed program analysis tools
  - Widespread use of algorithmic decomposition
    - Fine for explaining concurrent programming concepts and algorithms, but inadequate for developing large-scale concurrent network applications
  - Continuous rediscovery and reinvention of core concepts and components

OO Contributions to Concurrent Applications

- UNIX concurrent network programming has traditionally been performed using low-level OS mechanisms, e.g.,
  - fork/exec
  - Shared memory, mmap, and SysV semaphores
  - Signals
  - sockets/select
  - POSIX pthreads and Solaris threads
- OO design patterns and frameworks elevate development to focus on application concerns, e.g.,
  - Service functionality and policies
  - Service configuration
  - Concurrent event demultiplexing and event handler dispatching
  - Service concurrency and synchronization


Design Patterns

- Design patterns represent solutions to problems that arise when developing software within a particular context
  - i.e., "Patterns == problem/solution pairs in a context"
- Patterns capture the static and dynamic structure and collaboration among key participants in software designs
  - They are particularly useful for articulating how and why to resolve non-functional forces
- Patterns facilitate reuse of successful software architectures and designs

Active Object Pattern

- Intent: decouples the thread of method execution from the thread of method invocation

[Figure: structure of the pattern — clients invoke m1(), m2(), m3() on an Object Interface and receive ResultHandles; each invocation inserts a Method Object on an Activation Queue; a Scheduler thread loops, removes Method Objects, and dispatches them against the Resource Representation.]


Collaboration in the Active Object Pattern

[Figure: a producer Task's put(msg) enqueues a message on a consumer Task's Message_Queue via enqueue_tail(msg); the consumer's svc() loop runs asynchronously, dequeues messages via dequeue_head(msg), and performs work(). The producer phase (passing and enqueueing the message) is visible to clients; the queueing and asynchronous dispatching phases are invisible to clients.]

Frameworks

- A framework is: "An integrated collection of components that collaborate to produce a reusable architecture for a family of related applications"
- Frameworks differ from conventional class libraries:
  1. Frameworks are "semi-complete" applications
  2. Frameworks address a particular application domain
  3. Frameworks provide "inversion of control"
- Typically, applications are developed by inheriting from and instantiating framework components


Differences Between Class Libraries and Frameworks

[Figure: with class libraries, application-specific logic owns the event loop and invokes math, ADT, networking, user-interface, and database classes; with an OO framework, the framework owns the reactive event loop and invokes application-specific logic via callbacks.]

The ADAPTIVE Communication Environment (ACE)

- A set of C++ wrappers, class categories, and frameworks based on design patterns

[Figure: the layered ACE architecture — general POSIX and Win32 services; an OS adaptation layer over the process/thread, communication, and virtual memory subsystems; C APIs (thread library, STREAM pipes, sockets/TLI, named pipes, select/poll, dynamic linking, memory mapping, System V IPC); C++ wrappers (SOCK_SAP/TLI_SAP, SPIPE SAP, FIFO SAP, synchronization and thread wrappers, Reactor, Service Configurator, shared malloc, log message, memory map wrappers); framework class categories (ADAPTIVE Service eXecutive (ASX), CORBA handlers, acceptor/connector, Stream Framework); distributed services (name, time, logging, token, and gateway servers); and application-specific services and components.]


Class Categories in ACE

[Figure: the ACE class categories — Interprocess Communication, Service Initialization, Concurrency, Reactor, Service Configurator, Stream Framework, and Network Services.]

Class Categories in ACE (cont'd)

- Responsibilities of each class category
  - IPC encapsulates local and/or remote IPC mechanisms
  - Service Initialization encapsulates active/passive connection establishment mechanisms
  - Concurrency encapsulates and extends multi-threading and synchronization mechanisms
  - Reactor performs event demultiplexing and event handler dispatching
  - Service Configurator automates configuration and reconfiguration by encapsulating explicit dynamic linking mechanisms
  - Stream Framework models and implements layers and partitions of hierarchically-integrated communication software
  - Network Services provides distributed naming, logging, locking, and routing services


Concurrency Overview

- A thread of control is a single sequence of execution steps performed in one or more programs
  - One program → standalone systems
  - More than one program → distributed systems
- Traditional OS processes contain a single thread of control
  - This simplifies programming, since a sequence of execution steps is protected from unwanted interference by other execution sequences...

Traditional Approaches to OS Concurrency

1. Device drivers and programs with signal handlers utilize a limited form of concurrency
   - e.g., asynchronous I/O
   - Note that concurrency encompasses more than multi-threading...
2. Many existing programs utilize OS processes to provide "coarse-grained" concurrency
   - e.g., client/server database applications and standard network daemons like UNIX inetd
   - Multiple OS processes may share memory via memory mapping or shared memory, and use semaphores to coordinate execution
   - The OS kernel scheduler dictates process behavior


Evaluating Traditional OS Process-based Concurrency

- Advantages
  - Easy to keep processes from interfering
    - A process combines security, protection, and robustness
- Disadvantages
  1. Complicated to program, e.g.,
     - Signal handling may be tricky
     - Shared memory may be inconvenient
  2. Inefficient
     - The OS kernel is involved in synchronization and process management
     - Difficult to exert fine-grained control over scheduling and priorities

Modern OS Concurrency

- Modern OS platforms typically provide a standard set of APIs that handle
  1. Process/thread creation and destruction
  2. Various types of process/thread synchronization and mutual exclusion
  3. Asynchronous facilities for interrupting long-running processes/threads to report errors and control program behavior
- Once the underlying concepts are mastered, it's relatively easy to learn different concurrency APIs
  - e.g., traditional UNIX process operations, Solaris threads, POSIX pthreads, WIN32 threads, Java threads, etc.


Lightweight Concurrency

- Modern OSs provide lightweight mechanisms that manage and synchronize multiple threads within a process
  - Some systems also allow threads to synchronize across multiple processes
- Benefits of threads
  1. Relatively simple and efficient to create, control, synchronize, and collaborate
     - Threads share many process resources by default
  2. Improve performance by overlapping computation and communication
     - Threads may also consume fewer resources than processes
  3. Improve program structure
     - e.g., compared with using asynchronous I/O

Hardware and OS Concurrency Support

- Most modern OS platforms provide kernel support for multi-threading
  - e.g., the SunOS multi-processing (MP) model
- There are 4 primary abstractions
  1. Processing elements (hardware)
  2. Kernel threads (kernel)
  3. Lightweight processes (user/kernel)
  4. Application threads (user)
- Sun MP thread semantics work for both uni-processors and multi-processors...


Single-threaded vs. Multi-threaded RPC

[Figure: in single-threaded RPC, the client blocks on each request while the server executes one service at a time; in multi-threaded RPC, multiple client threads issue requests concurrently and the server executes the corresponding services in parallel.]

Sun MP Model (cont'd)

[Figure: user-level application threads are multiplexed onto lightweight processes (LWPs); the kernel schedules LWPs onto processing elements (PEs), which communicate through shared memory. A UNIX process contains the threads and LWPs.]


Application Threads

- Most process resources are equally accessible to all threads in a process, e.g.,
  - Virtual memory
  - User permissions and access control privileges
  - Open files
  - Signal handlers
- Each thread also contains unique information, e.g.,
  - Identifier
  - Register set (e.g., PC and SP)
  - Run-time stack
  - Signal mask
  - Priority
  - Thread-specific data (e.g., errno)
- Note, there is generally no MMU protection for separate threads within a single process...
- Application threads may be bound and/or unbound
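The point about thread-specific data such as errno can be demonstrated with C++'s thread_local storage class, a modern stand-in for the thread-specific data APIs the slides assume (run_in_thread is an illustrative helper, not part of any library):

```cpp
#include <cassert>
#include <thread>

// Each thread sees its own copy of a thread_local variable,
// much like the per-thread errno described above.
thread_local int request_count = 0;

// Increment the spawned thread's copy of request_count and
// report the value that thread observed.
int run_in_thread(int iterations) {
    int observed = 0;
    std::thread t([&] {
        for (int i = 0; i < iterations; ++i)
            ++request_count;     // touches this thread's copy only
        observed = request_count;
    });
    t.join();
    return observed;
}
```

The spawned thread's increments never affect the calling thread's copy, which stays at its initial value.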

Kernel-level vs. User-level Threads

- Application and system characteristics influence the choice of user-level vs. kernel-level threading
- A high degree of "virtual" application concurrency implies user-level threads (i.e., unbound threads)
  - e.g., a desktop windowing system on a uni-processor
- A high degree of "real" application parallelism implies lightweight processes (LWPs) (i.e., bound threads)
  - e.g., a video-on-demand server or matrix multiplication on a multi-processor


Synchronization Mechanisms

- Threads share resources in a process address space
- Therefore, they must use synchronization mechanisms to coordinate their access to shared data
- Traditional OS synchronization mechanisms are very low-level, tedious to program, error-prone, and non-portable
- ACE encapsulates these mechanisms with higher-level patterns and classes

Common OS Synchronization Mechanisms

1. Mutual exclusion locks
   - Serialize access to a shared resource
2. Counting semaphores
   - Synchronize execution
3. Readers/writer locks
   - Serialize access to resources whose contents are searched more than changed
4. Condition variables
   - Used to block until shared data changes state
5. File locks
   - System-wide readers/writer locks accessed by filename


Additional ACE Synchronization Mechanisms

1. Guards
   - An exception-safe scoped locking mechanism
2. Barriers
   - Allow threads to synchronize their completion
3. Token
   - Provides absolute scheduling order and simplifies multi-threaded event loop integration
4. Task
   - Provides higher-level "active object" semantics for concurrent applications
5. Thread-specific storage
   - Low-overhead, contention-free storage

Concurrency Mechanisms in ACE

[Figure: the ACE concurrency class category — managers (Thread Manager, Process Manager); active objects (Task); locks (Mutex, Thread Mutex, Process Mutex, RW Mutex variants, Semaphore variants, Recursive Thread Mutex, Token, File Lock, Null Mutex); guards (Guard, Read Guard, Write Guard, Thread Control); conditions (Condition, Null Condition); and utilities (Barrier, Atomic Op, TSS).]


Graphical Notation

[Figure: legend for the diagrams that follow — symbols for class, abstract class, class template, class category, and class utility, and for the inherits, instantiates, contains, and uses relationships.]

Concurrent WWW Client/Server Example

- The following example illustrates a concurrent OO architecture for a high-performance WWW client/server
- Key system requirements are:
  1. Robust implementation of the HTTP 1.0 protocol
     - i.e., resilient to incorrect or malicious WWW clients/servers
  2. Extensible for use with other protocols
     - e.g., DICOM, HTTP 1.1, etc.
  3. Leverage multi-processor hardware and OS software
     - e.g., support various concurrency models


WWW Client/Server Architecture

[Figure: the WWW client sends "1: GET ... HTTP/1.0" to the WWW server at www.cs.wustl.edu, which returns "2: index.html".]

- www.w3.org/pub/WWW/Protocols/HTTP/

Multi-threaded WWW Server Architecture

[Figure: the main thread runs 1: select(), 2: recv(msg), 3: enqueue(msg) onto a Message Queue; worker threads on CPUs 1-4 each run 1: dequeue(msg), 2: process().]

- Worker threads execute within one process


Pseudo-code for Concurrent WWW Server

- Pseudo-code for the master server:

    void master (void)
    {
      initialize work queue and
        listener endpoint at port 80
      spawn pool of worker threads
      foreach (pending work request from clients) {
        receive and queue request on work queue
      }
      exit process
    }

- Pseudo-code for workers:

    void worker (void)
    {
      foreach (work request on queue)
        dequeue and process request
      exit thread
    }
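As a sanity check of this master/worker structure, here is a compilable sketch in standard C++; it assumes an in-memory queue of string "requests" and a shutdown sentinel in place of real sockets and process exit (WorkQueue and serve are illustrative names):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// A thread-safe work queue: the master enqueues requests and a
// pool of worker threads dequeues and "processes" them.
class WorkQueue {
public:
    void enqueue(std::string msg) {
        { std::lock_guard<std::mutex> g(lock_);
          queue_.push(std::move(msg)); }
        cond_.notify_one();
    }
    std::string dequeue() {
        std::unique_lock<std::mutex> g(lock_);
        cond_.wait(g, [this] { return !queue_.empty(); });
        std::string msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }
private:
    std::mutex lock_;
    std::condition_variable cond_;
    std::queue<std::string> queue_;
};

// Master: spawn the pool, queue each request, then signal shutdown.
int serve(int num_requests, int num_workers) {
    WorkQueue queue;
    std::mutex count_lock;
    int processed = 0;

    std::vector<std::thread> workers;
    for (int i = 0; i < num_workers; ++i)
        workers.emplace_back([&] {
            for (;;) {
                std::string msg = queue.dequeue();
                if (msg == "SHUTDOWN") break;       // sentinel = exit thread
                std::lock_guard<std::mutex> g(count_lock);
                ++processed;                        // "process" the request
            }
        });

    for (int i = 0; i < num_requests; ++i)
        queue.enqueue("GET /index.html");
    for (int i = 0; i < num_workers; ++i)
        queue.enqueue("SHUTDOWN");                  // one per worker

    for (auto &w : workers) w.join();
    return processed;
}
```

Because the queue is FIFO and one sentinel is enqueued per worker after all the requests, every request is processed exactly once before the pool shuts down.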

OO Design Interlude

- Q: Why use a work queue to store messages, rather than directly reading from I/O descriptors?
- A:
  - Separation of concerns
    - Shields the application from the semantics of the thread library (user-level vs. kernel-level threads)
  - Promotes more efficient use of multiple CPUs via load balancing
  - Enables transparent interpositioning
  - Makes it easier to shut down the system correctly
- Drawbacks
  - Using a message queue may lead to greater context switching and synchronization overhead...


Thread Entry Point

- Each thread executes a function that serves as the "entry point" into a separate thread of control
  - Note the algorithmic design...

    typedef u_long COUNTER;

    // Track the number of requests
    COUNTER request_count; // At file scope.

    // Entry point into the WWW HTTP 1.0 protocol.
    void *worker (Message_Queue *msg_queue)
    {
      Message_Block *mb; // Message buffer.

      while (msg_queue->dequeue_head (mb) > 0) {
        // Keep track of number of requests.
        ++request_count;

        // Print diagnostic
        cout << "got new request " << OS::thr_self ()
             << endl;

        // Identify and perform WWW Server
        // request processing here...
      }
      return 0;
    }

Pseudo-code for recv_requests

- e.g.,

    void recv_requests (Message_Queue *msg_queue)
    {
      initialize socket listener endpoint at port 80

      foreach (incoming request)
      {
        use select to wait for new connections or data

        if (connection)
          establish connections using accept
        else if (data) {
          use socket calls to read HTTP requests into msg
          msg_queue.enqueue_tail (msg);
        }
      }
    }

- The "grand mistake:"
  - Avoid the temptation to "step-wise refine" this algorithmically decomposed pseudo-code directly into the detailed design and implementation of the WWW Server!


Master Server Driver Function

- The master driver function in the WWW Server might be structured as follows:

    // Thread function prototype.
    typedef void *(*THR_FUNC)(void *);

    static const int NUM_THREADS = /* ... */;

    int main (int argc, char *argv[]) {
      parse_args (argc, argv);
      Message_Queue msg_queue; // Queue client requests.

      // Spawn off NUM_THREADS to run in parallel.
      for (int i = 0; i < NUM_THREADS; i++)
        thr_create (0, 0, (THR_FUNC) &worker,
                    (void *) &msg_queue, THR_NEW_LWP, 0);

      // Initialize network device and recv HTTP work requests.
      thr_create (0, 0, (THR_FUNC) &recv_requests,
                  (void *) &msg_queue, THR_NEW_LWP, 0);

      // Wait for all threads to exit.
      thread_t t_id;
      while (thr_join (0, &t_id, (void **) 0) == 0)
        continue; // ...
    }

Limitations with the WWW Server

- The algorithmic decomposition tightly couples application-specific functionality with various configuration-related characteristics, e.g.,
  - The HTTP 1.0 protocol
  - The number of services per process
  - The time when services are configured into a process
- The solution is not portable since it hard-codes
  - SunOS 5.x threading
  - sockets and select
- There are race conditions in the code


Overcoming Limitations via OO

- The algorithmic decomposition illustrated above specifies too many low-level details
  - Furthermore, the excessive coupling complicates reusability, extensibility, and portability...
- In contrast, OO focuses on decoupling application-specific behavior from reusable application-independent mechanisms
- The OO approach described below uses reusable object-oriented framework components and commonly recurring design patterns

Eliminating Race Conditions (Part 1 of 2)

- Problem
  - The concurrent server illustrated above contains "race conditions"
    - e.g., the auto-increment of the global variable request_count is not serialized properly
- Forces
  - Modern shared memory multi-processors use deep caches and weakly ordered memory models
  - Access to shared data must be protected from corruption
- Solution
  - Use synchronization mechanisms


Basic Synchronization Mechanisms

- One approach to solve the serialization problem is to use OS mutual exclusion mechanisms explicitly, e.g.,

    // SunOS 5.x, implicitly "unlocked".
    mutex_t lock;
    typedef u_long COUNTER;
    COUNTER request_count;

    void *worker (Message_Queue *msg_queue)
    {
      // in function scope ...
      mutex_lock (&lock);
      ++request_count;
      mutex_unlock (&lock);
      // ...
    }

- However, adding these mutex_* calls explicitly is inelegant, obtrusive, error-prone, and non-portable

C++ Wrappers for Synchronization

- To address portability problems, define a C++ wrapper:

    class Thread_Mutex
    {
    public:
      Thread_Mutex (void) {
        mutex_init (&lock_, USYNC_THREAD, 0);
      }
      ~Thread_Mutex (void) { mutex_destroy (&lock_); }
      int acquire (void) { return mutex_lock (&lock_); }
      int tryacquire (void) { return mutex_trylock (&lock_); }
      int release (void) { return mutex_unlock (&lock_); }
    private:
      mutex_t lock_; // SunOS 5.x serialization mechanism.

      // Disallow copying and assignment.
      void operator= (const Thread_Mutex &);
      Thread_Mutex (const Thread_Mutex &);
    };

- Note, this mutual exclusion class interface is portable to other OS platforms


Porting Thread_Mutex to Windows NT

- Win32 version of Thread_Mutex:

    class Thread_Mutex
    {
    public:
      Thread_Mutex (void) {
        InitializeCriticalSection (&lock_);
      }
      ~Thread_Mutex (void) {
        DeleteCriticalSection (&lock_);
      }
      int acquire (void) {
        EnterCriticalSection (&lock_); return 0;
      }
      int tryacquire (void) {
        TryEnterCriticalSection (&lock_); return 0;
      }
      int release (void) {
        LeaveCriticalSection (&lock_); return 0;
      }
    private:
      CRITICAL_SECTION lock_; // Win32 locking mechanism.
      // ...
    };

Using the C++ Thread_Mutex Wrapper

- Using the C++ wrapper helps improve portability and elegance:

    Thread_Mutex lock;
    typedef u_long COUNTER;
    COUNTER request_count;
    // ...

    void *worker (Message_Queue *msg_queue)
    {
      lock.acquire ();
      ++request_count;
      lock.release (); // Don't forget to call!
      // ...
    }

- However, it does not solve the obtrusiveness or error-proneness problems...


Automated Mutex Acquisition and Release

- To ensure mutexes are locked and unlocked, we'll define a template class that acquires and releases a mutex automatically:

    template <class LOCK>
    class Guard
    {
    public:
      Guard (LOCK &m): lock_ (m) { lock_.acquire (); }
      ~Guard (void) { lock_.release (); }
      // ...
    private:
      LOCK &lock_;
    };

- Guard uses the C++ idiom whereby a constructor acquires a resource and the destructor releases the resource
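The acquire-in-constructor/release-in-destructor behavior can be observed with a stub lock (Recorder is a hypothetical class that merely records calls; the Guard template follows the slide's interface):

```cpp
#include <cassert>
#include <string>

// A stub LOCK that records acquire/release calls, so the
// Guard's constructor/destructor behavior is observable.
class Recorder {
public:
    int acquire() { log += "A"; return 0; }
    int release() { log += "R"; return 0; }
    std::string log;
};

// Scoped locking: the constructor acquires, the destructor releases.
template <class LOCK>
class Guard {
public:
    Guard(LOCK &m) : lock_(m) { lock_.acquire(); }
    ~Guard() { lock_.release(); }
private:
    LOCK &lock_;
};

// The lock is acquired on entry to the inner scope and released
// on exit -- even if the scope is left early.
std::string critical_section() {
    Recorder lock;
    {
        Guard<Recorder> monitor(lock);   // "A" recorded here
        lock.log += "W";                 // "work" under the lock
    }                                    // "R" recorded here
    return lock.log;
}
```

The recorded call order confirms that the release can never be forgotten, which is the point of the idiom.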

Using the Guard Class

- Using the Guard class helps reduce errors:

    Thread_Mutex lock;
    typedef u_long COUNTER;
    COUNTER request_count;

    void *worker (Message_Queue *msg_queue) {
      // ...
      Guard<Thread_Mutex> monitor (lock);
      ++request_count;
      // ...
    }

- However, using the Thread_Mutex and Guard classes is still overly obtrusive and subtle
  - e.g., beware of elided braces...
- A more elegant solution incorporates parameterized types, overloading, and the Decorator pattern


OO Design Interlude

- Q: Why is Guard parameterized by the type of LOCK?
- A: there are many locking mechanisms that benefit from Guard functionality, e.g.,
  - Non-recursive vs. recursive mutexes
  - Intra-process vs. inter-process mutexes
  - Readers/writer mutexes
  - Solaris and System V semaphores
  - File locks
  - Null mutex
- In ACE, all synchronization classes use the Wrapper and Adapter patterns to provide identical interfaces that facilitate parameterization

The Wrapper Pattern

- Intent: "Encapsulate low-level, stand-alone functions within type-safe, modular, and portable class interfaces"
- This pattern resolves the following forces that arise when using native C-level OS APIs:
  1. How to avoid tedious, error-prone, and non-portable programming of low-level IPC and locking mechanisms
  2. How to combine multiple related, but independent, functions into a single cohesive abstraction


Structure of the Wrapper Pattern

[Figure: a client issues 1: request() on a Wrapper, which forwards 2: specific_request() to the underlying Wrappee.]

Using the Wrapper Pattern for Locking

[Figure: a client issues 1: acquire() on a Mutex wrapper (acquire(), release(), tryacquire()), which forwards 2: mutex_lock() to the Solaris mutex_lock()/mutex_unlock()/mutex_trylock() functions.]


The Adapter Pattern

- Intent: "Convert the interface of a class into another interface the client expects"
  - Adapter lets classes work together that couldn't otherwise because of incompatible interfaces
- This pattern resolves the following force that arises when using conventional OS interfaces:
  1. How to provide an interface that expresses the similarities of seemingly different OS mechanisms, such as locking or IPC

Structure of the Adapter Pattern

[Figure: a client issues 1: request() on an Adapter, whose specific_request() delegates 2: specific_request() to one of several Adaptees (Adaptee1, Adaptee2, Adaptee3).]


Using the Adapter Pattern for Locking

[Figure: a client issues 1: Guard() on Guard<LOCK> (Guard(), ~Guard()), instantiated with Mutex; the Guard 2: acquire()s the Mutex, which 3: delegates to the platform primitive — Solaris mutex_lock(), POSIX pthread_mutex_lock(), or Win32 EnterCriticalSection().]

Transparently Parameterizing Synchronization Using C++

- The following C++ template class uses the "Decorator" pattern to define a set of atomic operations on a type parameter:

    template <class LOCK = Thread_Mutex,
              class TYPE = u_long>
    class Atomic_Op {
    public:
      Atomic_Op (TYPE c = 0) { count_ = c; }

      TYPE operator++ (void) {
        Guard<LOCK> m (lock_); return ++count_;
      }
      operator TYPE () {
        Guard<LOCK> m (lock_);
        return count_;
      }
      // Other arithmetic operations omitted...
    private:
      LOCK lock_;
      TYPE count_;
    };


Thread-safe Version of Concurrent Server

- Using the Atomic_Op class, only one change is made to the code:

    #if defined (MT_SAFE)
    typedef Atomic_Op<> COUNTER; // Note default parameters...
    #else
    typedef Atomic_Op<Null_Mutex> COUNTER;
    #endif /* MT_SAFE */

    COUNTER request_count;

- request_count is now serialized automatically:

    void *worker (Message_Queue *msg_queue)
    {
      // ...
      // Calls Atomic_Op::operator++(void)
      ++request_count;
      // ...
    }
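A compilable approximation of this decorator over std::mutex (a sketch of the idea rather than ACE's exact class; hammer is an illustrative stress helper):

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <vector>

// Decorates a counter type with a lock so that ++ and reads are
// serialized; client code still just writes ++request_count.
template <class TYPE = unsigned long>
class Atomic_Op {
public:
    Atomic_Op(TYPE c = 0) : count_(c) {}
    TYPE operator++() {
        std::lock_guard<std::mutex> g(lock_);
        return ++count_;
    }
    operator TYPE() {
        std::lock_guard<std::mutex> g(lock_);
        return count_;
    }
private:
    std::mutex lock_;
    TYPE count_;
};

Atomic_Op<> request_count;

// Increment the shared counter from several threads at once;
// the decorator guarantees no updates are lost.
unsigned long hammer(int threads, int per_thread) {
    std::vector<std::thread> pool;
    for (int i = 0; i < threads; ++i)
        pool.emplace_back([per_thread] {
            for (int j = 0; j < per_thread; ++j)
                ++request_count;   // serialized automatically
        });
    for (auto &t : pool) t.join();
    return request_count;
}
```

With a plain unsigned long instead of Atomic_Op<>, the same test would intermittently lose increments, which is exactly the race condition described above.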

Eliminating Race Conditions (Part 2 of 2)

- Problem
  - A naive implementation of Message_Queue will lead to race conditions
    - e.g., when messages in different threads are enqueued and dequeued concurrently
- Forces
  - Producer/consumer concurrency is common, but requires careful attention to avoid overhead and deadlock and to ensure proper control
- Solution
  - Utilize a "Condition object"


Condition Object Overview

- Condition objects are used to "sleep/wait" until a particular condition involving shared data is signaled
  - Conditions may be arbitrarily complex C++ expressions
  - Sleeping is often more efficient than busy waiting...
- This allows more complex scheduling decisions, compared with a mutex
  - i.e., a mutex makes other threads wait, whereas a condition object allows a thread to make itself wait for a particular condition involving shared data

Condition Object Usage

- A particular idiom is associated with acquiring resources via Condition objects:

    // Global variables
    static Thread_Mutex lock; // Initially unlocked.

    // Initially unlocked.
    static Condition<Thread_Mutex> cond (lock);

    void acquire_resources (void) {
      // Automatically acquire the lock.
      Guard<Thread_Mutex> monitor (lock);

      // Check condition (note the use of while).
      while (condition expression is not true)
        // Sleep if the expression is not true.
        cond.wait ();

      // Atomically modify shared information here...

      // monitor destructor automatically releases lock.
    }


Condition Object Usage (cont'd)

- Another idiom is associated with releasing resources via Condition objects:

    void release_resources (void) {
      // Automatically acquire the lock.
      Guard<Thread_Mutex> monitor (lock);

      // Atomically modify shared information here...

      cond.signal (); // Could also use cond.broadcast ()

      // monitor destructor automatically releases lock.
    }

- Note how the use of the Guard idiom simplifies the solution
  - e.g., now we can't forget to release the lock!
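The same two idioms map directly onto std::condition_variable in standard C++ (a sketch; resource_count stands in for the "shared information" and the function names are illustrative):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Shared state guarded by lock; cond announces changes to it.
std::mutex lock;
std::condition_variable cond;
int resource_count = 0;

// Acquire idiom: lock, wait in a loop until the condition holds
// (guards against spurious wakeups), then modify shared state.
int acquire_resource() {
    std::unique_lock<std::mutex> monitor(lock);   // automatically acquire
    cond.wait(monitor, [] { return resource_count > 0; });
    return --resource_count;                      // atomically modify
}                                                 // destructor releases lock

// Release idiom: lock, modify shared state, then signal.
void release_resource() {
    { std::lock_guard<std::mutex> monitor(lock);  // automatically acquire
      ++resource_count; }                         // atomically modify
    cond.notify_one();                            // could also notify_all
}

// One consumer sleeps until the producer makes a resource available.
int demo() {
    std::thread consumer([] { acquire_resource(); });
    release_resource();
    consumer.join();
    return resource_count;
}
```

Note that wait() atomically releases the lock while sleeping and re-acquires it before returning, which is what makes the check-then-sleep loop race-free.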

Condition Object Interface

- In ACE, the Condition class is a wrapper for the native OS condition variable abstraction
  - e.g., cond_t on SunOS 5.x, pthread_cond_t for POSIX, and a custom implementation on Win32

    template <class MUTEX>
    class Condition
    {
    public:
      // Initialize the condition variable.
      Condition (const MUTEX &, int = USYNC_THREAD);

      // Implicitly destroy the condition variable.
      ~Condition (void);

      // Block on condition, or until abstime has
      // passed. If abstime == 0 use blocking semantics.
      int wait (Time_Value *abstime = 0) const;

      // Signal one waiting thread.
      int signal (void) const;

      // Signal *all* waiting threads.
      int broadcast (void) const;

    private:
      cond_t cond_; // Solaris condition variable.
      const MUTEX &mutex_; // Reference to mutex lock.
    };


Overview of Message_Queue and Message_Block Classes

- A Message_Queue is composed of one or more Message_Blocks
  - Similar to BSD mbufs or SVR4 STREAMS m_blks
  - The goal is to enable efficient manipulation of arbitrarily-large message payloads without incurring unnecessary memory copying overhead
- Message_Blocks are linked together by prev_ and next_ pointers
- A Message_Block may also be linked to a chain of other Message_Blocks

The Message_Block Class

- The contents of a message are represented by a Message_Block:

    class Message_Block
    {
      friend class Message_Queue;
    public:
      Message_Block (size_t size,
                     Message_Type type = MB_DATA,
                     Message_Block *cont = 0,
                     char *data = 0,
                     Allocator *alloc = 0);
      // ...
    private:
      char *base_;
      // Pointer to beginning of payload.

      Message_Block *next_;
      // Pointer to next message in the queue.

      Message_Block *prev_;
      // Pointer to previous message in the queue.

      Message_Block *cont_;
      // Pointer to next fragment in this message.
      // ...
    };


Message_Queue and Message_Block Object Diagram

[Figure: a Message_Queue<SYNCH> holds head_ and tail_ pointers into a doubly-linked list of Message_Blocks (next_/prev_); each Message_Block points to a Payload and may chain message fragments through cont_.]
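The fragment-chaining idea can be sketched as follows (Block and message_length are simplified illustrations, not the ACE classes): one logical message is a cont_-linked chain of fragments, so a large payload never has to live in one contiguous buffer.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// A simplified message block: fragments of one logical message
// are chained via cont_; distinct messages are chained via next_.
struct Block {
    std::string payload;
    Block *cont_ = nullptr;   // next fragment in this message
    Block *next_ = nullptr;   // next message in the queue
};

// Total length of one logical message, summed across fragments,
// without copying or concatenating any payload bytes.
std::size_t message_length(const Block *b) {
    std::size_t n = 0;
    for (; b != nullptr; b = b->cont_)
        n += b->payload.size();
    return n;
}
```

Walking cont_ rather than copying is what gives the zero-copy property the slide describes.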

OO Design Interlude

- Q: What is the Allocator object in the Message_Block constructor?
- A: It provides an extensible mechanism to control how memory is allocated and deallocated
  - This makes it possible to switch memory management policies without modifying the Message_Block class
  - By default, the policy is to use new and delete, but it's easy to use other schemes, e.g.,
    - Shared memory
    - Persistent memory
    - Thread-specific memory
  - A similar technique is also used in the C++ Standard Template Library


OO Design Interlude (cont'd)

- Here's an example of the interfaces used in ACE
  - Note the use of the Adapter pattern to integrate third-party memory allocators:

    class Allocator {
      // ...
      virtual void *malloc (size_t nbytes) = 0;
      virtual void free (void *ptr) = 0;
    };

    template <class ALLOCATOR>
    class Allocator_Adapter : public Allocator {
      // ...
      virtual void *malloc (size_t nbytes) {
        return allocator_.malloc (nbytes);
      }
      ALLOCATOR allocator_;
    };

    Allocator_Adapter sh_malloc;
    Allocator_Adapter new_malloc;
    Allocator_Adapter p_malloc;

The Message_Queue Class Public Interface

- A Message_Queue is a thread-safe queueing facility for Message_Blocks
  - The bulk of the locking is performed in the public methods:

    template <class SYNCH_STRATEGY>
    class Message_Queue
    {
    public:
      // Default high and low water marks.
      enum { DEFAULT_LWM = 0, DEFAULT_HWM = 4096 };

      // Initialize a Message_Queue.
      Message_Queue (size_t hwm = DEFAULT_HWM,
                     size_t lwm = DEFAULT_LWM);

      // Check if full or empty (hold locks).
      int is_empty (void) const;
      int is_full (void) const;

      // Enqueue and dequeue Message_Block *'s.
      int enqueue_prio (Message_Block *, Time_Value *);
      int enqueue_tail (Message_Block *, Time_Value *);
      int dequeue_head (Message_Block *&, Time_Value *);

      // ...


The Message_Queue Class Private Interface

- The bulk of the work is performed in the private methods:

    private:
      // Routines that actually do the enqueueing and
      // dequeueing (do not hold locks).
      int enqueue_prio_i (Message_Block *, Time_Value *);
      int enqueue_tail_i (Message_Block *new_item);
      int dequeue_head_i (Message_Block *&first_item);

      // Check the boundary conditions (do not hold locks).
      int is_empty_i (void) const;
      int is_full_i (void) const;

      // Parameterized types for synchronization
      // primitives that control concurrent access.
      // Note the use of C++ "traits".
      SYNCH_STRATEGY::MUTEX lock_;
      SYNCH_STRATEGY::CONDITION not_empty_cond_;
      SYNCH_STRATEGY::CONDITION not_full_cond_;
    };

OO Design Interlude

- Q: How should locking be performed in an OO class?

- A: In general, the following pattern is useful:

  - "Public functions should lock, private functions should not
    lock"

    . This also helps to avoid intra-class method deadlock...

  - This is actually a variant on a common OO pattern that "public
    functions should check, private functions should trust"

  - Naturally, there are exceptions to this rule...

The Message Queue Class Implementation

- Uses ACE synchronization wrappers

template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::is_empty_i (void) const {
  return cur_bytes_ <= 0 && cur_count_ <= 0;
}

template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::is_full_i (void) const {
  return cur_bytes_ > high_water_mark_;
}

template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::is_empty (void) const {
  Guard<SYNCH_STRATEGY::MUTEX> m (lock_);
  return is_empty_i ();
}

template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::is_full (void) const {
  Guard<SYNCH_STRATEGY::MUTEX> m (lock_);
  return is_full_i ();
}
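The "public functions lock, private functions trust" idiom above can be sketched with std::mutex and a scoped guard; Counter is a hypothetical monitor-style class, not part of ACE.

```cpp
#include <mutex>

// Monitor-style class: public methods acquire the lock,
// private *_i helpers assume it is already held.
class Counter {
public:
  void increment () {
    std::lock_guard<std::mutex> guard (lock_);
    increment_i ();
  }
  int value () const {
    std::lock_guard<std::mutex> guard (lock_);
    return value_i ();
  }
private:
  // Called only with lock_ held; these never lock, so public
  // methods can compose them freely without self-deadlocking
  // on the non-recursive mutex.
  void increment_i () { ++count_; }
  int value_i () const { return count_; }

  mutable std::mutex lock_;
  int count_ = 0;
};
```

If increment() called value() directly (rather than value_i()), the second lock_guard on the same non-recursive mutex would deadlock, which is exactly the hazard the idiom avoids.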

// Queue new item at the end of the list.
template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::enqueue_tail
  (Message_Block *new_item, Time_Value *tv)
{
  Guard<SYNCH_STRATEGY::MUTEX> monitor (lock_);

  // Wait while the queue is full.
  while (is_full_i ())
  {
    // Release the lock_ and wait for timeout, signal,
    // or space becoming available in the list.
    if (not_full_cond_.wait (tv) == -1)
      return -1;
  }

  // Actually enqueue the message at the end of the list.
  enqueue_tail_i (new_item);

  // Tell blocked threads that list has a new item!
  not_empty_cond_.signal ();
}

// Dequeue the front item on the list and return it
// to the caller.
template <class SYNCH_STRATEGY> int
Message_Queue<SYNCH_STRATEGY>::dequeue_head
  (Message_Block *&first_item, Time_Value *tv)
{
  Guard<SYNCH_STRATEGY::MUTEX> monitor (lock_);

  // Wait while the queue is empty.
  while (is_empty_i ())
  {
    // Release the lock_ and wait for timeout, signal,
    // or a new message being placed in the list.
    if (not_empty_cond_.wait (tv) == -1)
      return -1;
  }

  // Actually dequeue the first message.
  dequeue_head_i (first_item);

  // Tell blocked threads that list is no longer full.
  not_full_cond_.signal ();
}
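The enqueue/dequeue logic above can be sketched with standard C++ condition variables; Bounded_Queue below is a simplified, hypothetical stand-in for ACE's Message_Queue (blocking only, no timeouts or water marks).

```cpp
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <utility>

// Bounded, thread-safe queue mirroring the slide's
// enqueue_tail()/dequeue_head() structure.
template <class T>
class Bounded_Queue {
public:
  explicit Bounded_Queue (std::size_t capacity) : capacity_ (capacity) {}

  void enqueue_tail (T item) {
    std::unique_lock<std::mutex> monitor (lock_);
    // Wait while the queue is full; wait() releases the lock
    // while blocked and reacquires it before returning.
    not_full_cond_.wait (monitor,
                         [this] { return queue_.size () < capacity_; });
    queue_.push_back (std::move (item));
    not_empty_cond_.notify_one ();  // Queue has a new item.
  }

  T dequeue_head () {
    std::unique_lock<std::mutex> monitor (lock_);
    // Wait while the queue is empty.
    not_empty_cond_.wait (monitor, [this] { return !queue_.empty (); });
    T item = std::move (queue_.front ());
    queue_.pop_front ();
    not_full_cond_.notify_one ();   // Queue is no longer full.
    return item;
  }

private:
  std::mutex lock_;
  std::condition_variable not_empty_cond_;
  std::condition_variable not_full_cond_;
  std::deque<T> queue_;
  std::size_t capacity_;
};
```

The predicate form of wait() re-checks the condition after each wakeup, which plays the same role as the while loops in the slide's code.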

Overcoming Algorithmic Decomposition Limitations

- The previous slides illustrate low-level OO techniques, idioms,
  and patterns that:

  1. Reduce accidental complexity, e.g.,

     - Automate synchronization acquisition and release (C++
       constructor/destructor idiom)

     - Improve consistency of synchronization interface (Adapter
       and Wrapper patterns)

  2. Eliminate race conditions

- The next slides describe higher-level ACE framework components
  and patterns that:

  1. Increase reuse and extensibility, e.g.,

     - Decoupling the solution from particular service, IPC, and
       demultiplexing mechanisms

  2. Improve the flexibility of concurrency control

A Concurrent OO WWW Server

- The following example illustrates an OO solution to the
  concurrent WWW Server

  - The active objects are based on the ACE Task class

- There are several ways to structure concurrency in a WWW Server:

  1. Single-threaded

  2. Thread-per-request

  3. Thread-per-session

  4. Thread-pool

Single-threaded WWW Server Architecture

[Figure: a client 1: CONNECTs; the Reactor 2: HANDLEs INPUT for
the HTTP_Acceptor, which 3: CREATEs an HTTP_Processor, 4: ACCEPTs
the CONNECTION, and 5: ACTIVATEs the PROCESSOR, which then
6: PROCESSes the HTTP REQUEST, all within a single thread.]

Thread-per-Request WWW Server Architecture

[Figure: the same steps 1 through 4, except the HTTP_Acceptor
5: SPAWNs a THREAD per request, and each HTTP_Processor
6: PROCESSes its HTTP REQUEST in its own thread.]

Thread-per-Session WWW Server Architecture

[Figure: a client 1: BINDs to the server; the HTTP_Acceptor
2: CREATEs, ACCEPTs, AND ACTIVATEs an HTTP_Handler and 3: SPAWNs a
THREAD for the session; the HTTP_Handler then 4: PROCESSes all
HTTP REQUESTs from its client in that thread.]

Thread-Pool WWW Server Architecture

[Figure: a client sends 1: an HTTP REQUEST; the Reactor 2: HANDLEs
INPUT for an HTTP_Handler, which 3: ENQUEUEs the REQUEST on a
Message_Queue; worker threads in the HTTP_Processor pool
5: DEQUEUE & PROCESS REQUESTs.]

Architecture of the WWW Server

[Figure: the thread-pool WWW Server, showing svc_run worker
threads, a Message_Queue, an HTTP_Processor, HTTP_Handlers, an
HTTP_Acceptor, an Options singleton, and a Reactor.]

Design Patterns in the WWW Client/Server

[Figure: strategic patterns include Acceptor, Connector, Active
Object, Thread-per-Request, Thread-per-Session, Thread Pool,
Half-Sync/Half-Async, Service Configurator, Reactor, and
Double-Checked Locking; tactical patterns include Proxy, Strategy,
Adapter, and Singleton.]

The Reactor Pattern

- Intent

  - "Decouples event demultiplexing and event handler dispatching
    from the services performed in response to events"

- This pattern resolves the following forces for event-driven
  software:

  - How to demultiplex multiple types of events from multiple
    sources of events efficiently within a single thread of
    control

  - How to extend application behavior without requiring changes
    to the event dispatching framework

Structure of the Reactor Pattern

[Figure: a Reactor (handle_events(), register_handler(h),
remove_handler(h)) demultiplexes Handles via select() and
dispatches to registered Event_Handlers (handle_input(),
handle_output(), handle_signal(), handle_timeout(),
get_handle()); a Timer_Queue (schedule_timer(h), cancel_timer(h),
expire_timers()) handles timer expiry; application-dependent
Concrete_Event_Handlers derive from the application-independent
Event_Handler base.]

- Participants in the Reactor pattern
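A minimal sketch of the Reactor's dispatch table, assuming the caller supplies the set of ready handles (a real reactor such as ACE's obtains it from select() or poll()); Reactor and Event_Handler here are simplified stand-ins for the framework classes.

```cpp
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Single-threaded reactor sketch: handlers register for integer
// "handles"; handle_events() dispatches each ready handle to its
// registered handler.
class Reactor {
public:
  using Event_Handler = std::function<void (int handle)>;

  void register_handler (int handle, Event_Handler h) {
    table_[handle] = std::move (h);
  }
  void remove_handler (int handle) { table_.erase (handle); }

  // Dispatch every ready handle that has a registered handler.
  void handle_events (const std::vector<int> &ready) {
    for (int h : ready) {
      auto it = table_.find (h);
      if (it != table_.end ())
        it->second (h);  // Callback into application code.
    }
  }

private:
  std::map<int, Event_Handler> table_;
};
```

Note how the dispatching loop never changes when new handler types are added, which is the extensibility force the pattern resolves.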

Collaboration in the Reactor Pattern

[Figure: the main program creates the Reactor, registers a
Concrete_Event_Handler (register_handler(callback)), has its
handle extracted (get_handle()), then starts the event loop
(handle_events()); select() waits for events, and the Reactor
dispatches handle_input() when data arrives, handle_output() when
it is OK to send, handle_signal() when a signal arrives, and
handle_timeout() when a timer expires; remove_handler(callback)
triggers handle_close() for cleanup.]

Using the Reactor in the WWW Server

[Figure: the Reactor dispatches 1: handle_input() to an
HTTP_Handler, which 2: recv_request(msg) and 3: putq(msg) onto the
Message_Queue; HTTP_Processor threads running svc_run then
4: getq(msg) and 5: svc(msg).]

The HTTP Handler Public Interface

- The HTTP_Handler is the Proxy for communicating with clients,
  e.g., WWW browsers like Netscape or IE

  - Together with the Reactor, it implements the asynchronous
    portion of the Half-Sync/Half-Async pattern

// Reusable base class.
template <class PEER_STREAM>
class HTTP_Handler :
  public Svc_Handler<PEER_STREAM, NULL_SYNCH> {
public:
  // Entry point into HTTP_Handler, called by
  // HTTP_Acceptor.
  virtual int open (void *) {
    // Register with Reactor to handle client input.
    Service_Config::reactor ()->register_handler
      (this, READ_MASK);

    // Register timeout in case client doesn't
    // send any HTTP requests.
    Service_Config::reactor ()->schedule_timer
      (this, 0, Time_Value (HTTP_CLIENT_TIMEOUT));
  }

The HTTP Handler Protected Interface

- The following methods are invoked by callbacks from the Reactor

protected:
  // Reactor notifies when clients time out.
  virtual int handle_timeout (const Time_Value &,
                              const void *)
  {
    // Remove from the Reactor.
    Service_Config::reactor ()->remove_handler
      (this, READ_MASK);
  }

  // Reactor notifies when client HTTP requests arrive.
  virtual int handle_input (HANDLE);

  // Receive/frame client HTTP requests (e.g., GET).
  int recv_request (Message_Block *&);
};

The Active Object Pattern

- Intent

  - "Decouples method execution from method invocation and
    simplifies synchronized access to shared resources by
    concurrent threads"

- This pattern resolves the following forces for concurrent
  communication software:

  - How to allow blocking operations (such as read and write) to
    execute concurrently

  - How to serialize concurrent access to shared object state

  - How to simplify composition of independent services

Structure of the Active Object Pattern

[Figure: a Client_Interface whose methods (m1(), m2(), m3())
return ResultHandles is visible to clients; a Scheduler
(dispatch()) loops removing method objects (m1'(), m2'(), m3'())
from an Activation_Queue (insert(), remove()) and dispatching
them; the Method_Objects and Resource_Representation are invisible
to clients.]

- Intent: decouples the thread of method execution from the thread
  of method invocation
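The scheduler/activation-queue structure can be sketched with a worker thread and a queue of closures; Active_Object below is a hypothetical minimal version that omits the pattern's ResultHandles and method objects.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>
#include <utility>

// Minimal active object: callers enqueue method requests, and a
// single scheduler thread dequeues and executes them, decoupling
// method execution from method invocation.
class Active_Object {
public:
  Active_Object () : worker_ ([this] { run (); }) {}

  ~Active_Object () {
    enqueue (nullptr);        // Sentinel: tell the scheduler to exit.
    worker_.join ();
  }

  // Invocation: returns as soon as the request is queued.
  void enqueue (std::function<void ()> method) {
    std::lock_guard<std::mutex> guard (lock_);
    queue_.push_back (std::move (method));
    cond_.notify_one ();
  }

private:
  // Execution: the scheduler thread's dispatch loop.
  void run () {
    for (;;) {
      std::function<void ()> method;
      {
        std::unique_lock<std::mutex> guard (lock_);
        cond_.wait (guard, [this] { return !queue_.empty (); });
        method = std::move (queue_.front ());
        queue_.pop_front ();
      }
      if (!method) return;    // Sentinel received.
      method ();              // Run outside the lock.
    }
  }

  std::mutex lock_;
  std::condition_variable cond_;
  std::deque<std::function<void ()>> queue_;
  std::thread worker_;
};
```

Because only the scheduler thread touches the object's state, requests are serialized without per-method locking.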

Using the Active Object Pattern in the WWW Server

[Figure: as before, the Reactor dispatches 1: handle_input() to an
HTTP_Handler, which 2: recv_request(msg) and 3: putq(msg); svc_run
threads in the HTTP_Processor 4: getq(msg) and 5: svc(msg).]

Implementing the Active Object Pattern in ACE

[Figure: class hierarchy in which Task<SYNCH> (open(), close(),
put(), svc()) contains a Message_Queue<SYNCH> and inherits from
Service_Object, which in turn inherits from Event_Handler
(handle_input(), handle_output(), handle_exception(),
handle_signal(), handle_timeout(), handle_close(), get_handle())
and Shared_Object (init(), fini(), info(), suspend(), resume()).]

ACE Task Class Public Interface

- C++ interface for message processing

  * Tasks can register with a Reactor
  * They can be dynamically linked
  * They can queue data
  * They can run as "active objects"

template <class SYNCH_STRATEGY>
class Task : public Service_Object
{
public:
  Task (Thread_Manager * = 0, Message_Queue * = 0);

  // Initialization/termination routines.
  virtual int open (void *args = 0) = 0;
  virtual int close (u_long flags = 0) = 0;

  // Transfer msg to queue for immediate processing.
  virtual int put (Message_Block *, Time_Value * = 0) = 0;

  // Run by a daemon thread for deferred processing.
  virtual int svc (void) = 0;

  // Turn the task into an active object.
  int activate (long flags, int n_threads = 1);

Collaboration in ACE Active Objects

[Figure: active Task t1 1: put (msg); active Task t2 2: enqueues
(msg) on its Message_Queue; t2's svc () thread 3: runs and
4: dequeues (msg), 5: do_work(msg), and 6: put (msg) to active
Task t3; each Task has its own Message_Queue.]

Task Class Public Interface (cont'd)

- The following methods are mostly used within the put and svc
  hooks

// Accessors to internal queue.
Message_Queue *msg_queue (void);
void msg_queue (Message_Queue *);

// Accessors to thread manager.
Thread_Manager *thr_mgr (void);
void thr_mgr (Thread_Manager *);

// Insert message into the message list.
int putq (Message_Block *, Time_Value *tv = 0);

// Extract the first message from the list (blocking).
int getq (Message_Block *&mb, Time_Value *tv = 0);

// Hook into the underlying thread library.
static void *svc_run (Task *);

OO Design Interlude

- Q: What is the svc_run function and why is it a static method?

- A: OS thread spawn APIs require a C-style function as the entry
  point into a thread

- The Stream class category encapsulates the svc_run function
  within the Task::activate method:

template <class SYNCH_STRATEGY> int
Task<SYNCH_STRATEGY>::activate (long flags, int n_threads)
{
  if (thr_mgr () == NULL)
    thr_mgr (Service_Config::thr_mgr ());

  thr_mgr ()->spawn_n
    (n_threads, &Task<SYNCH_STRATEGY>::svc_run,
     (void *) this, flags);
}

OO Design Interlude (cont'd)

- Task::svc_run is a static method used as the entry point to
  execute an instance of a service concurrently in its own thread

template <class SYNCH_STRATEGY> void *
Task<SYNCH_STRATEGY>::svc_run (Task<SYNCH_STRATEGY> *t)
{
  Thread_Control tc (t->thr_mgr ()); // Record thread ID.

  // Run service handler and record return value.
  void *status = (void *) t->svc ();
  tc.status (status);
  t->close ((u_long) status);

  // Status becomes `return' value of thread...
  return status;
  // Thread removed from thr_mgr automatically
  // on return...
}

OO Design Interlude

- Q: "How can groups of collaborating threads be managed
  atomically?"

- A: Develop a "thread manager" class

  - Thread_Manager is a collection class

    . It provides mechanisms for suspending and resuming groups of
      threads atomically

    . It implements barrier synchronization on thread exits

  - Thread_Manager also shields applications from
    incompatibilities between different OS thread libraries

    . It is integrated into ACE via the Task::activate method
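The static-entry-point idiom can be sketched directly against the POSIX threads API, since pthread_create() accepts only a C-style void *(*)(void *) function; this Task is a stripped-down, hypothetical stand-in for ACE's class.

```cpp
#include <pthread.h>

// A Task-like class whose svc() hook runs in its own thread.
// pthread_create() needs a C-compatible entry point, so svc_run()
// is a static method that casts its void* argument back to the
// object and invokes the virtual hook.
class Task {
public:
  virtual ~Task () {}
  virtual int svc () = 0;     // Service hook, run in the new thread.

  int activate () {
    return pthread_create (&tid_, 0, &Task::svc_run, this);
  }
  int wait () { return pthread_join (tid_, 0); }

private:
  // C-style thread entry point.
  static void *svc_run (void *arg) {
    Task *t = static_cast<Task *> (arg);
    t->svc ();                // Dispatch to the subclass override.
    return 0;
  }
  pthread_t tid_;
};
```

A non-static member function would not work here because its implicit this parameter makes its type incompatible with the C function pointer pthread_create() expects.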

The HTTP Processor Class

- Processes HTTP requests using the "Thread-Pool" concurrency
  model to implement the synchronous task portion of the
  Half-Sync/Half-Async pattern

class HTTP_Processor : public Task {
public:
  // Singleton access point.
  static HTTP_Processor *instance (void);

  // Pass a request to the thread pool.
  virtual int put (Message_Block *, Time_Value *);

  // Entry point into a pool thread.
  virtual int svc (void)
  {
    Message_Block *mb = 0; // Message buffer.

    // Wait for messages to arrive.
    for (;;)
    {
      getq (mb); // Inherited from class Task.

      // Identify and perform HTTP Server
      // request processing here...
    }
  }

Using the Singleton

- The HTTP_Processor is implemented as a Singleton that is created
  "on demand"

// Singleton access point.
HTTP_Processor *
HTTP_Processor::instance (void)
{
  // Beware of race conditions!
  if (instance_ == 0)
    instance_ = new HTTP_Processor;

  return instance_;
}

// Constructor creates the thread pool.
HTTP_Processor::HTTP_Processor (void)
{
  // Inherited from class Task.
  activate (THR_NEW_LWP, num_threads);
}

The Double-Checked Locking Optimization Pattern

- Intent

  - "Ensures atomic initialization of objects and eliminates
    unnecessary locking overhead on each access"

- This pattern resolves the following forces:

  1. Ensures atomic initialization or access to objects,
     regardless of thread scheduling order

  2. Keeps locking overhead to a minimum

     - e.g., only lock on first access

- Note, this pattern assumes atomic memory access...

Using the Double-Checked Locking Optimization Pattern for the WWW Server

if (instance_ == NULL) {
  mutex_.acquire ();
  if (instance_ == NULL)
    instance_ = new HTTP_Processor;
  mutex_.release ();
}
return instance_;

[Figure: HTTP_Processor with static instance() and static
instance_, protected by a Mutex.]
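Since C++11, the effect double-checked locking aims for (lock only on first access, with well-defined behavior under the memory model) is available directly via std::call_once; the sketch below reuses the slide's class name purely for illustration.

```cpp
#include <mutex>

// "Lock only on first access" singleton: std::call_once
// guarantees the initializer runs exactly once, even when
// several threads race on the first call to instance().
class HTTP_Processor {
public:
  static HTTP_Processor *instance () {
    std::call_once (once_, [] { instance_ = new HTTP_Processor; });
    return instance_;
  }
private:
  HTTP_Processor () {}
  static std::once_flag once_;
  static HTTP_Processor *instance_;
};

std::once_flag HTTP_Processor::once_;
HTTP_Processor *HTTP_Processor::instance_ = nullptr;
```

A function-local static (`static HTTP_Processor obj; return &obj;`) achieves the same guarantee with even less machinery, since C++11 requires its initialization to be thread-safe.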

Half-Sync/Half-Async Pattern

- Intent

  - "An architectural pattern that decouples synchronous I/O from
    asynchronous I/O in a system to simplify programming effort
    without degrading execution efficiency"

- This pattern resolves the following forces for concurrent
  communication systems:

  - How to simplify programming for higher-level communication
    tasks

    . These are performed synchronously (via Active Objects)

  - How to ensure efficient lower-level I/O communication tasks

    . These are performed asynchronously (via the Reactor)

Structure of the Half-Sync/Half-Async Pattern

[Figure: three layers: the synchronous task layer, whose sync
tasks 1, 4: read(data); the queueing layer of message queues, onto
which the async task 3: enqueue(data); and the asynchronous task
layer, where external event sources deliver 2: interrupt.]

Collaboration in the Half-Sync/Half-Async Pattern

[Figure: an external event source notifies the async task
(EXTERNAL EVENT); the async task read(msg)s the message (RECV MSG)
and enqueue(msg)s it on the message queue (ENQUEUE MSG); the sync
task read(msg)s it from the queue (DEQUEUE MSG) and executes
work() (EXECUTE TASK).]

Using the Half-Sync/Half-Async Pattern in the WWW Server

[Figure: in the async phase, the Reactor dispatches
1: handle_input() to an HTTP_Handler, which 2: recv_request(msg);
in the queueing phase, the handler 3: putq(msg) onto the
Message_Queue; in the sync phase, HTTP_Processor threads running
svc_run 4: getq(msg) and 5: svc(msg).]

- This illustrates input processing (output processing is similar)

Joining Async and Sync Tasks in the WWW Server

- The following methods form the boundary between the Async and
  Sync layers

template <class PEER_STREAM> int
HTTP_Handler<PEER_STREAM>::handle_input (HANDLE h)
{
  Message_Block *mb = 0;

  // Try to receive and frame message.
  if (recv_request (mb) == HTTP_REQUEST_COMPLETE) {
    Service_Config::reactor ()->remove_handler
      (this, READ_MASK);
    Service_Config::reactor ()->cancel_timer (this);

    // Insert message into the Queue.
    HTTP_Processor::instance ()->put (mb);
  }
}

int
HTTP_Processor::put (Message_Block *msg,
                     Time_Value *timeout) {
  // Insert the message on the Message_Queue
  // (inherited from class Task).
  putq (msg, timeout);
}

The Acceptor Pattern

- Intent

  - "Decouple the passive initialization of a service from the
    tasks performed once the service is initialized"

- This pattern resolves the following forces for network servers
  using interfaces like sockets or TLI:

  1. How to reuse passive connection establishment code for each
     new service

  2. How to make the connection establishment code portable across
     platforms that may contain sockets but not TLI, or vice versa

  3. How to enable flexible policies for creation, connection
     establishment, and concurrency

  4. How to ensure that a passive-mode descriptor is not
     accidentally used to read or write data

Structure of the Acceptor Pattern

[Figure: an Acceptor factory parameterized by SVC_HANDLER and
PEER_ACCEPTOR (e.g., SOCK_Acceptor); open() initializes the
passive endpoint, and handle_input() performs
sh = make_svc_handler(), accept_svc_handler(sh), and
activate_svc_handler(sh); Svc_Handlers are parameterized by
PEER_STREAM (e.g., SOCK_Stream); both layers sit atop a Reactor.]

Collaboration in the Acceptor Pattern

[Figure: endpoint initialization phase: open() initializes the
passive endpoint and registers the Acceptor with the Reactor;
service initialization phase: on a connection event, the Acceptor
creates, accepts, and activates a Svc_Handler, which registers
itself for client I/O; service processing phase: handle_input()
and svc() process data, and handle_close() handles client or
server shutdown.]

- The Acceptor factory creates, connects, and activates a
  Svc_Handler

The HTTP Acceptor Class Interface

- The HTTP_Acceptor class implements the Acceptor pattern

  - i.e., it accepts connections and initializes HTTP_Handlers

template <class PEER_ACCEPTOR>
class HTTP_Acceptor : public
  Acceptor<HTTP_Handler
             <PEER_ACCEPTOR::PEER_STREAM>, // This is a ``trait.''
           PEER_ACCEPTOR>
{
public:
  // Called when HTTP_Acceptor is dynamically linked.
  virtual int init (int argc, char *argv[]);

  // Called when HTTP_Acceptor is dynamically unlinked.
  virtual int fini (void);

  // ...
};

Using the Acceptor Pattern in the WWW Server

[Figure: the passive-mode HTTP_Acceptor, registered with the
Reactor, listens for connections; on 1: handle_input() it
2: sh = make_svc_handler(), 3: accept_svc_handler(sh), and
4: activate_svc_handler(sh), yielding active HTTP_Handler
(Svc_Handler) connections.]

The HTTP Acceptor Class Implementation

// Initialize service when dynamically linked.
template <class PA> int
HTTP_Acceptor<PA>::init (int argc, char *argv[])
{
  Options::instance ()->parse_args (argc, argv);

  // Initialize the communication endpoint.
  peer_acceptor ().open
    (PA::PEER_ADDR (Options::instance ()->port ()));

  // Register to accept connections.
  Service_Config::reactor ()->register_handler
    (this, READ_MASK);
}

// Terminate service when dynamically unlinked.
template <class PA> int
HTTP_Acceptor<PA>::fini (void)
{
  // Shutdown threads in the pool.
  HTTP_Processor::instance ()->
    msg_queue ()->deactivate ();

  // Wait for all threads to exit.
  HTTP_Processor::instance ()->thr_mgr ()->wait ();
}

The Service Configurator Pattern

- Intent

  - "Decouples the behavior of network services from the point in
    time at which these services are configured into an
    application"

- This pattern resolves the following forces for network daemons:

  - How to defer the selection of a particular type, or a
    particular implementation, of a service until very late in the
    design cycle

    . i.e., at installation-time or run-time

  - How to build complete applications by composing multiple
    independently developed services

  - How to reconfigure and control the behavior of the service at
    run-time

Structure of the Service Configurator Pattern

[Figure: a Service_Config uses a Service_Repository (insert(),
remove()) to manage Service_Objects; Service_Object (suspend(),
resume()) inherits from Event_Handler and from Shared_Object
(init(), fini(), info()); the reactive layer is a Reactor.]

Collaboration in the Service Configurator Pattern

[Figure: configuration mode: main() creates a Service_Config;
process_directives() dynamically links each service,
init(argc, argv) initializes it, register_handler(svc) registers
it, its handle is extracted, and it is stored in the repository;
event-handling mode: run_event_loop()/handle_events() dispatch
incoming events via handle_input(); on a shutdown event,
handle_close() runs, the service is removed
(remove_handler(svc)), fini()'d, and unlinked.]

Using the Service Configurator Pattern in the WWW Server

- The concurrent WWW Server is configured and initialized via a
  configuration script

% cat ./svc.conf
dynamic TP_WWW_Server Service_Object *
www_server.dll:make_TP_WWW_Server()
  "-p $PORT -t $THREADS"

- Existing service is based on the Half-Sync/Half-Async
  "Thread-Pool" pattern

- Other versions could be single-threaded, could use other
  concurrency strategies, and other protocols

[Figure: the Service_Repository holds the dynamically linked
TP WWW Server Service_Object; alternative Service_Objects include
a Reactive WWW Server and a TPR WWW Server, all driven by the
Service_Config and Reactor.]

Service Configurator Implementation in C++

- Factory function that dynamically allocates a
  Half-Sync/Half-Async WWW Server object

extern "C" Service_Object *make_TP_WWW_Server (void);

Service_Object *make_TP_WWW_Server (void)
{
  return new HTTP_Acceptor;
  // ACE dynamically unlinks and deallocates this object.
}

Parameterizing IPC Mechanisms with C++ Templates

- To switch between a socket-based service and a TLI-based
  service, simply instantiate with a different C++ wrapper

// Determine the communication mechanisms.
#if defined (USE_SOCKETS)
typedef SOCK_Acceptor PEER_ACCEPTOR;
#elif defined (USE_TLI)
typedef TLI_Acceptor PEER_ACCEPTOR;
#endif

Service_Object *make_TP_WWW_Server (void)
{
  return new HTTP_Acceptor<PEER_ACCEPTOR>;
}

Main Program for WWW Server

- Dynamically configure and execute the WWW Server

  - Note that this is totally generic!

int main (int argc, char *argv[])
{
  Service_Config daemon;

  // Initialize the daemon and dynamically
  // configure the service.
  daemon.open (argc, argv);

  // Loop forever, running services and handling
  // reconfigurations.
  daemon.run_reactor_event_loop ();
  /* NOTREACHED */
}
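The typedef-based switch above can be sketched with stub policy classes; SOCK_Acceptor and TLI_Acceptor here are hypothetical stand-ins (not the real ACE wrappers), and the name() member exists only to make the selection observable.

```cpp
#include <string>

// Stub transport policies with a common compile-time interface.
struct SOCK_Acceptor {
  static std::string name () { return "sockets"; }
};
struct TLI_Acceptor {
  static std::string name () { return "TLI"; }
};

// The service template never names the transport directly; it is
// parameterized by whatever acceptor policy it is instantiated
// with.
template <class PEER_ACCEPTOR>
class HTTP_Acceptor {
public:
  std::string transport () const { return PEER_ACCEPTOR::name (); }
};

// Select the transport at compile time, as on the slide.
#if defined (USE_TLI)
typedef TLI_Acceptor PEER_ACCEPTOR;
#else
typedef SOCK_Acceptor PEER_ACCEPTOR;  // Default: socket-based.
#endif
```

Because the choice is a typedef, switching transports recompiles the service against a different policy without touching the service code itself.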

The Connector Pattern

- Intent

  - "Decouple the active initialization of a service from the task
    performed once a service is initialized"

- This pattern resolves the following forces for network clients
  that use interfaces like sockets or TLI:

  1. How to reuse active connection establishment code for each
     new service

  2. How to make the connection establishment code portable across
     platforms that may contain sockets but not TLI, or vice versa

  3. How to enable flexible policies for creation, connection
     establishment, and concurrency

  4. How to efficiently establish connections with a large number
     of peers or over a long-delay path

Structure of the Connector Pattern

[Figure: a Connector factory parameterized by SVC_HANDLER and
PEER_CONNECTOR (e.g., SOCK_Connector); connect(sh, addr) performs
1: connect_svc_handler(sh, addr) and 2: activate_svc_handler(sh);
Svc_Handlers are parameterized by PEER_STREAM (e.g., SOCK_Stream);
the reactive layer's Reactor dispatches handle_output() when a
connection completes.]

Collaboration in the Connector Pattern

[Figure: synchronous mode. Connection initiation phase:
connect(sh, addr) invokes connect_svc_handler(sh, addr), which
performs a blocking connect(); service initialization phase:
activate_svc_handler(sh) calls open(), which registers the
Svc_Handler with the Reactor and extracts its handle; service
processing phase: handle_input() and svc() process arriving data.]

Collaboration in the Connector Pattern

[Figure: asynchronous mode. Connection initiation phase:
connect_svc_handler(sh, addr) starts a non-blocking connect() and
registers the Connector with the Reactor; when the connection
completes, handle_output() triggers activate_svc_handler(sh);
service initialization and processing then proceed as in the
synchronous case.]

Using the Connector Pattern in a WWW Client

[Figure: a Connector, registered with the Reactor, manages pending
connections; completed connections yield active HTTP_Handler
(Svc_Handler) objects.]

ACE Stream Example: Parallel I/O Copy

- Illustrates an implementation of the classic "bounded buffer"
  problem

  - The program copies stdin to stdout via the use of a
    multi-threaded Stream

- A Stream is an implementation of the "Layered Service
  Composition" pattern

  - The intent is to allow flexible configuration of layered
    processing modules

    . e.g., in the Netscape HTML parser

Implementing a Stream in ACE

- A Stream contains a stack of Modules

- Each Module contains two Tasks

  - In this example, the "read" Task is always ignored since the
    data flow is uni-directional

- Each Task contains a Message_Queue and a pointer to a
  Thread_Manager

Producer and Consumer Object Interactions

[Figure: the active Producer Module 1: read()s from stdin; its
2: svc() thread 3: put()s each message downstream; the active
Consumer Module's 4: svc() thread dequeues it and 5: write()s it
to stdout.]

Producer Interface

- e.g.,

// typedef short-hands for the templates.
typedef Stream<MT_SYNCH> MT_Stream;
typedef Module<MT_SYNCH> MT_Module;
typedef Task<MT_SYNCH> MT_Task;

// Define the Producer interface.
class Producer : public MT_Task
{
public:
  // Initialize Producer.
  virtual int open (void *)
  {
    // activate is inherited from class Task.
    activate (THR_NEW_LWP);
  }

  // Read data from stdin and pass to consumer.
  virtual int svc (void);
  // ...
};

// Run in a separate thread.
int
Producer::svc (void)
{
  for (int n;;) {
    // Allocate a new message.
    Message_Block *mb = new Message_Block (BUFSIZ);

    // Keep reading stdin, until we reach EOF.
    if ((n = read (0, mb->rd_ptr (), mb->size ())) <= 0)
    {
      // Send a shutdown message to other thread and exit.
      mb->length (0);
      this->put_next (mb);
      break;
    }
    else
    {
      mb->wr_ptr (n); // Adjust write pointer.

      // Send the message to the other thread.
      this->put_next (mb);
    }
  }
  return 0;
}

Consumer Class Interface

- e.g.,

// Define the Consumer interface.
class Consumer : public MT_Task
{
public:
  // Initialize Consumer.
  virtual int open (void *)
  {
    // activate is inherited from class Task.
    activate (THR_NEW_LWP);
  }

  // Enqueue the message on the Message_Queue for
  // subsequent processing in svc.
  virtual int put (Message_Block *mb, Time_Value *tv = 0)
  {
    // putq is inherited from class Task.
    return putq (mb, tv);
  }

  // Receive message from producer and print to stdout.
  virtual int svc (void);
};

// The consumer dequeues a message from the Message_Queue,
// writes the message to the stdout stream, and deletes
// the message. The Producer sends a 0-sized message to
// inform the Consumer to stop reading and exit.
int
Consumer::svc (void)
{
  Message_Block *mb = 0;

  // Keep looping, reading a message out of the queue,
  // until we get a message with a length == 0,
  // which informs us to quit.
  for (;;)
  {
    int result = getq (mb);
    if (result == -1) break;

    int length = mb->length ();

    if (length > 0)
      write (1, mb->rd_ptr (), length);

    delete mb;

    if (length == 0) break;
  }
  return 0;
}

Main Driver Function

- e.g.,

int main (int argc, char *argv[])
{
  // Control hierarchically-related active objects.
  MT_Stream stream;

  // Create Producer and Consumer Modules and push
  // them onto the Stream. All processing is then
  // performed in the Stream.
  stream.push (new MT_Module ("Consumer",
                              new Consumer));
  stream.push (new MT_Module ("Producer",
                              new Producer));

  // Barrier synchronization: wait for the threads
  // to exit, then exit ourselves.
  Service_Config::thr_mgr ()->wait ();
  return 0;
}

Evaluation of the Stream Class Category

- Structuring active objects via a Stream allows
  "interpositioning"

  - Similar to adding a filter in a UNIX pipeline

- New functionality may be added by "pushing" a new processing
  Module onto a Stream, e.g.,

stream.push (new MT_Module ("Consumer",
                            new Consumer));
stream.push (new MT_Module ("Filter",
                            new Filter));
stream.push (new MT_Module ("Producer",
                            new Producer));

Call Center Manager Example

[Figure: supervisors connect to a Call Center Manager (CCM) Stream
running in a run-time event server; its Modules include a Session
Router, Event Filter, Event Analyzer, and ASX Switch Adapter,
which receives events from telecom switches.]

Concurrency Strategies

- Developing correct, efficient, and robust concurrent
  applications is challenging

- Below, we examine a number of strategies that address challenges
  related to the following:

  - Concurrency control

  - Library design

  - Thread creation

  - Deadlock and starvation avoidance

General Threading Guidelines

- A threaded program should not arbitrarily enter non-threaded
  (i.e., "unsafe") code

- Threaded code may refer to unsafe code only from the main thread

  - e.g., beware of errno problems

- Use reentrant OS library routines ("_r") rather than
  non-reentrant routines

- Beware of thread global process operations

  - e.g., file I/O

- Make sure that main terminates via thr_exit(3T) rather than
  exit(2) or "falling off the end"

Thread Creation Strategies

- Use threads for independent jobs that must maintain state for
  the life of the job

- Don't spawn new threads for very short jobs

- Use threads to take advantage of CPU concurrency

- Only use "bound" threads when absolutely necessary

- If possible, tell the threads library how many threads are
  expected to be active simultaneously

  - e.g., use thr_setconcurrency

Locking Alternatives

* Code locking

  - Associate locks with the body of functions

    . Typically performed using bracketed mutex locks

  - Often called a monitor

* Data locking

  - Associate locks with data structures and/or objects

  - Permits a more fine-grained style of locking

* Data locking allows more concurrency than code locking, but may incur
  higher overhead

General Locking Guidelines

* Don't hold locks across long-duration operations (e.g., I/O) that can
  impact performance

  - Use "Tokens" instead...

* Beware of holding non-recursive mutexes when calling a method outside a
  class

  - The method may reenter the module and deadlock

* Don't lock at too small a level of granularity

* Make sure that threads obey the global lock hierarchy

  - But this is easier said than done...

Single-lock Strategy

* One way to simplify locking is to use a single, application-wide mutex lock

* Each thread must acquire the lock before running and release it upon
  completion

* The advantage is that most legacy code doesn't require changes

* The disadvantage is that parallelism is eliminated

  - Moreover, interactive response time may degrade if the lock isn't
    released periodically

Passive Object Strategy

* A more OO locking strategy is to use a "Passive Object"

  - Also known as a "monitor"

* A passive object contains synchronization mechanisms that allow multiple
  method invocations to execute concurrently

  - Either eliminate access to shared data or use synchronization objects

  - Hide locking mechanisms behind method interfaces

    . Therefore, modules should not export data directly

* The advantage is transparency

* The disadvantages are increased overhead from excessive locking and lack of
  control over method invocation order

Active Object Strategy

* Each task is modeled as an active object that maintains its own thread of
  control

* Messages sent to an object are queued up and processed asynchronously with
  respect to the caller

  - i.e., the order of execution may differ from the order of invocation

* This approach is more suitable to message passing-based concurrency

* The ACE Task class implements this approach

Invariants

* In general, an invariant is a condition that is always true

* For concurrent programs, an invariant is a condition that is always true
  when an associated lock is not held

  - However, when the lock is held the invariant may be false

  - When the code releases the lock, the invariant must be re-established

* e.g., enqueueing and dequeueing messages in the Message_Queue class

Deadlock

* Permanent blocking by a set of threads that are competing for a set of
  resources

* Caused by "circular waiting," e.g.,

  - A thread trying to reacquire a lock it already holds

  - Two threads trying to acquire resources held by the other

    . e.g., T1 and T2 acquire locks L1 and L2 in opposite order

* One solution is to establish a global ordering of lock acquisition (i.e., a
  lock hierarchy)

  - May be at odds with encapsulation...

Run-time Stack Problems

* Most threads libraries contain restrictions on stack usage

  - The initial thread gets the "real" process stack, whose size is limited
    only by the stacksize limit

  - All other threads get a fixed-size stack

    . Each thread stack is allocated off the heap and its size is fixed at
      startup time

* Therefore, be aware of "stack smashes" when debugging multi-threaded code

  - Overly small stacks lead to bizarre bugs, e.g.,

    . Functions that weren't called appear in backtraces

    . Functions have strange arguments

Avoiding Deadlock in OO Frameworks

* Deadlock can occur due to properties of OO frameworks, e.g.,

  - Callbacks

  - Intra-class method calls

* There are several solutions

  - Release locks before performing callbacks

    . Every time locks are reacquired it may be necessary to reevaluate the
      state of the object

  - Make private "helper" methods that assume locks are held when called by
    methods at higher levels

  - Use a Token or a Recursive Mutex

Recursive Mutex

* Not all thread libraries support recursive mutexes

  - Here is a portable implementation available in ACE:

class Recursive_Thread_Mutex
{
public:
  // Initialize a recursive mutex.
  Recursive_Thread_Mutex (void);

  // Implicitly release a recursive mutex.
  ~Recursive_Thread_Mutex (void);

  // Acquire a recursive mutex.
  int acquire (void) const;

  // Conditionally acquire a recursive mutex.
  int tryacquire (void) const;

  // Release a recursive mutex.
  int release (void) const;

private:
  Thread_Mutex nesting_mutex_;
  Condition mutex_available_;
  thread_t owner_id_;
  int nesting_level_;
};

// Acquire a recursive mutex (increments the nesting
// level and doesn't deadlock if the owner of the mutex
// calls this method more than once).
int
Recursive_Thread_Mutex::acquire (void) const
{
  thread_t t_id = Thread::self ();

  // Automatically acquire mutex.
  Guard<Thread_Mutex> mon (nesting_mutex_);

  // If there's no contention, grab mutex.
  if (nesting_level_ == 0) {
    owner_id_ = t_id;
    nesting_level_ = 1;
  } else if (t_id == owner_id_)
    // If we already own the mutex, then
    // increment the nesting level and proceed.
    nesting_level_++;
  else {
    // Wait until the nesting level drops
    // to zero, then acquire the mutex.
    while (nesting_level_ > 0)
      mutex_available_.wait ();

    // Note that at this point
    // the nesting_mutex_ is held...
    owner_id_ = t_id;
    nesting_level_ = 1;
  }
  return 0;
}

// Release a recursive mutex.
int
Recursive_Thread_Mutex::release (void) const
{
  thread_t t_id = Thread::self ();

  // Automatically acquire mutex.
  Guard<Thread_Mutex> mon (nesting_mutex_);

  nesting_level_--;
  if (nesting_level_ == 0) {
    // This may not be strictly necessary, but
    // it does put the mutex into a known state...
    owner_id_ = OS::NULL_thread;

    // Inform waiters that the mutex is free.
    mutex_available_.signal ();
  }
  return 0;
}

Recursive_Thread_Mutex::Recursive_Thread_Mutex (void)
  : nesting_level_ (0),
    owner_id_ (OS::NULL_thread),
    mutex_available_ (nesting_mutex_)
{
}

Drawbacks to Multi-threading

* Performance overhead

  - Some applications do not benefit directly from threads

  - Synchronization is not free

  - Threads should be created for processing that lasts at least several
    thousand instructions

* Correctness

  - Threads are not well protected against interference from other threads

  - Concurrency control issues are often tricky

  - Many legacy libraries are not thread-safe

* Development effort

  - Developers often lack experience

  - Debugging is complicated (lack of tools)

Avoiding Starvation

* Starvation occurs when a thread never acquires a mutex even though another
  thread periodically releases it

* The order of scheduling is often undefined

* This problem may be solved via:

  - Use of "voluntary pre-emption" mechanisms

    . e.g., thr_yield or Sleep

  - Using a "Token" that strictly orders acquisition and release

Lessons Learned using OO Design Patterns

* Benefits of patterns

  - Enable large-scale reuse of software architectures

  - Improve development team communication

  - Help transcend language-centric viewpoints

* Drawbacks of patterns

  - Do not lead to direct code reuse

  - Can be deceptively simple

  - Teams may suffer from pattern overload

Lessons Learned using OO Frameworks

* Benefits of frameworks

  - Enable direct reuse of code (cf. patterns)

  - Facilitate larger amounts of reuse than stand-alone functions or
    individual classes

* Drawbacks of frameworks

  - High initial learning curve

    . Many classes, many levels of abstraction

  - The flow of control for reactive dispatching is non-intuitive

  - Verification and validation of generic components is hard

Lessons Learned using C++

* Benefits of C++

  - Classes and namespaces modularize the system architecture

  - Inheritance and dynamic binding decouple application policies from
    reusable mechanisms

  - Parameterized types decouple the reliance on particular types of
    synchronization methods or network IPC interfaces

* Drawbacks of C++

  - Many language features are not widely implemented

  - Development environments are primitive

  - The language has many dark corners and sharp edges

Obtaining ACE

* The ADAPTIVE Communication Environment (ACE) is an OO toolkit designed
  according to key network programming patterns

* All source code for ACE is freely available

  - Anonymously ftp to wuarchive.wustl.edu

  - Transfer the files /languages/c++/ACE/*.gz

* Mailing lists

  . [email protected]

  . [email protected]

  . [email protected]

  . [email protected]

* WWW URL

  - http://www.cs.wustl.edu/~schmidt/ACE.html

Patterns Literature

* Books

  - Gamma et al., "Design Patterns: Elements of Reusable Object-Oriented
    Software", Addison-Wesley, 1994

  - "Pattern Languages of Program Design" series, Addison-Wesley, 1995 and
    1996

  - Siemens, "Pattern-Oriented Software Architecture", Wiley and Sons, 1996

* Special Issues in Journals

  - Dec. '96 "Theory and Practice of Object Systems" (guest editor: Stephen
    P. Berczuk)

  - Oct. '96 "Communications of the ACM" (guest editors: Douglas C. Schmidt,
    Ralph Johnson, and Mohamed Fayad)

* Magazines

  - C++ Report and Journal of Object-Oriented Programming, columns by
    Coplien, Vlissides, and Martin