The Thread-Pool Concurrency Model

Object Interconnections: Comparing Alternative Programming Techniques for Multi-threaded CORBA Servers: Thread Pool (Column 6)

Douglas C. Schmidt                              Steve Vinoski
[email protected]                               [email protected]
Department of Computer Science                  Hewlett-Packard Company
Washington University, St. Louis, MO 63130      Chelmsford, MA 01824

This column will appear in the April 1996 issue of the SIGS C++ Report magazine.

1 Introduction

Modern OS platforms like Windows NT, OS/2, and many flavors of UNIX provide extensive library and system call support for multi-threaded applications. However, programming multi-threaded applications is hard, and programming distributed multi-threaded applications is even harder. In particular, developers must address sources of accidental and inherent complexity:

Accidental complexity of multi-threaded programming arises from limitations with programming tools and design techniques. For example, many debuggers can't handle threaded programs and can't step across host boundaries. Likewise, algorithmic design [1] makes it hard to reuse application components because it tightly couples the structure of a threaded application to the functions it performs.

Inherent complexity of multi-threaded programming arises from challenges such as avoiding deadlock and livelock, eliminating race conditions for shared objects, and minimizing the overhead of context switching, synchronization, and data movement. An inherently complex aspect of programming multi-threaded distributed applications (particularly servers) involves selecting the appropriate concurrency model, which is the focus of this column.

Our previous column examined several ways to program multi-threaded stock quote servers using C, C++ wrappers, and two versions of CORBA (HP ORB Plus and MT Orbix). In that column, we focused on the thread-per-request concurrency model, where every incoming request causes a new thread to be spawned to process it. This column examines and evaluates another concurrency model: thread pool, which pre-spawns a fixed number of threads at start-up to service all incoming requests. We illustrate this model by developing new multi-threaded C, C++, and CORBA implementations of the stock quote server.

[Figure 1: Thread Pool Architecture for the Stock Quote Server. Clients send requests to the quote server (1: REQUEST QUOTE); the main thread receives each request (2: RECEIVE REQUEST) and enqueues it on the request queue (3: ENQUEUE REQUEST); a pool thread dequeues and processes it (4: DEQUEUE & PROCESS REQUEST) and returns the result (5: RETURN QUOTE VALUE).]

2 The Thread Pool Concurrency Model

The thread pool concurrency model is a variation of the thread-per-request model we examined last column. The main advantage of thread-per-request is its simplicity, which is why it's used in many multi-threaded ORBs (such as Orbix and HP ORB Plus). However, dynamically spawning a thread to handle each new request causes excessive resource utilization if the number of requests becomes very large and the OS resources required to support threads don't scale up efficiently.

The thread pool model avoids this overhead by pre-spawning a fixed number of threads at start-up to service all incoming requests. This strategy amortizes the cost of thread creation and bounds the use of OS resources. Client requests can execute concurrently until the number of simultaneous requests exceeds the number of threads in the pool. At this point, additional requests must be queued (or rejected) until a thread becomes available.

Figure 1 illustrates the main components in this concurrency model. These components include a main thread, a request queue, and a set of pool threads. The main thread receives new requests and inserts them into the tail of the request queue, while the pool threads remove requests from the head of the queue and service them. We'll explore the implementation and use of these components in this column using C, C++ wrappers, and CORBA, respectively.

3 The Multi-threaded C Thread Pool Solution

3.1 C Code

The following example shows a solution written using C, sockets, and Solaris threads [2]. (Porting our implementation to POSIX pthreads or Win32 threads is straightforward.) As in previous columns, we use a set of C utility functions to receive stock quote requests from clients (recv_request), look up quote information (lookup_stock_price), and return the quote to the client (send_response).

    /* WIN32 already defines this. */
    #if defined (unix)
    typedef int HANDLE;
    #endif /* unix */

    HANDLE create_server_endpoint (u_short port);
    int recv_request (HANDLE, struct Quote_Request *);
    int send_response (HANDLE, long stock_value);
    int handle_quote (HANDLE);

These functions were first implemented in the October 1995 issue of the C++ Report and were revised to become thread-safe in the February 1996 issue.
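The listings that follow also manipulate a Handle_Queue type whose definition does not appear in this excerpt. The sketch below is a guess at a minimal layout that would satisfy those listings, assuming a fixed-size circular buffer of HANDLEs; the queue_ array and the MAX_HANDLES constant are hypothetical, while the remaining field names are taken from the handle_queue_init and handle_queue_insert code shown later:

    #include <sys/types.h>   /* u_int */
    #include <synch.h>       /* Solaris mutex_t and cond_t. */

    /* Hypothetical capacity of the circular buffer. */
    #define MAX_HANDLES 128

    typedef struct Handle_Queue
    {
      HANDLE queue_[MAX_HANDLES]; /* Circular buffer of queued HANDLEs (assumed layout). */
      u_int head_;                /* Index of the next HANDLE to remove. */
      u_int tail_;                /* Index at which the next HANDLE is inserted. */
      u_int count_;               /* Number of HANDLEs currently queued. */
      u_int max_count_;           /* Capacity enforced for flow control. */

      mutex_t lock_;              /* Serializes access to the queue state. */
      cond_t notempty_;           /* Signaled when work becomes available. */
      cond_t notfull_;            /* Signaled when space becomes available. */
    } Handle_Queue;

Keeping the queue entirely in user space like this is what lets the main thread and the pool threads rendezvous without additional kernel involvement.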
3.1.1 The main() Thread

Our server main is similar to the one we presented for the multi-threaded C solution in our last column. The key difference is that we don't dynamically spawn a thread for each new client request. Instead, we create a thread-safe message queue and a pool of threads, and start an event loop in the main thread, as shown below:

    const int DEFAULT_PORT = 12345;
    const int DEFAULT_POOL_SIZE = 4;

    int main (int argc, char *argv[])
    {
      u_short port =      /* Port to listen for connections. */
        argc > 1 ? atoi (argv[1]) : DEFAULT_PORT;
      int pool_size =     /* Size of the thread pool. */
        argc > 2 ? atoi (argv[2]) : DEFAULT_POOL_SIZE;

      /* Create a passive-mode listener endpoint. */
      HANDLE listener = create_server_endpoint (port);

      Handle_Queue handle_queue;

      /* Initialize the thread-safe message queue. */
      handle_queue_init (&handle_queue);

      /* Initialize the thread pool. */
      thread_pool_init (&handle_queue, pool_size);

      /* The event loop for the main thread. */
      svc_run (&handle_queue, listener);
      /* NOTREACHED */
    }

The svc_run function runs the main thread's event loop, as follows:

    void svc_run (Handle_Queue *handle_queue,
                  HANDLE listener)
    {
      /* Main event loop. */
      for (;;) {
        /* Wait to accept a new connection. */
        HANDLE handle = accept (listener, 0, 0);

        /* Enqueue the request for processing
           by a thread in the pool. */
        handle_queue_insert (handle_queue, handle);
      }
      /* NOTREACHED */
    }

The main thread runs an event loop that continuously accepts new connections from clients and enqueues each connection in a Handle_Queue, which is a thread-safe queue of HANDLEs. Subsequently, a thread in the thread pool will remove the HANDLE from the queue, extract the client's stock quote request, look up the result, and return the result to the client.

The Handle_Queue plays several roles in this design. First, it decouples the main thread from the pool threads. This allows multiple pool threads to be active simultaneously and offloads the responsibility for maintaining the queue from kernel-space to user-space. Second, it enforces flow control between clients and the server. When there's no more room in the queue, the main thread blocks, which will "back-propagate" to the clients, preventing them from establishing new connections. New connection requests will not be accepted until the pool threads have a chance to catch up and can unblock the main thread.

Each thread in the thread pool is spawned by the thread_pool_init function:

    void
    thread_pool_init (Handle_Queue *handle_queue,
                      int pool_size)
    {
      int i;

      for (i = 0; i < pool_size; i++)
        /* Spawn off the thread pool. */
        thr_create
          (0,                     /* Use default thread stack. */
           0,                     /* Use default thread stack size. */
           &pool_thread,          /* Entry point. */
           (void *) handle_queue, /* Entry point arg. */
           THR_DETACHED | THR_NEW_LWP, /* Flags. */
           0);                    /* Don't bother returning thread id. */
    }
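The pool is created above with the Solaris threads call thr_create. The authors note that porting to POSIX pthreads (or Win32 threads) is straightforward; as an illustration, a rough sketch of what a pthreads port of thread_pool_init might look like follows (the name pthread_pool_init and the detached-attribute handling are ours, not code from the column):

    #include <pthread.h>

    /* Hypothetical POSIX-threads counterpart of thread_pool_init;
       pool_thread and Handle_Queue are the same as in the listings above. */
    void
    pthread_pool_init (Handle_Queue *handle_queue, int pool_size)
    {
      pthread_attr_t attr;
      int i;

      /* Create the pool threads detached, mirroring THR_DETACHED. */
      pthread_attr_init (&attr);
      pthread_attr_setdetachstate (&attr, PTHREAD_CREATE_DETACHED);

      for (i = 0; i < pool_size; i++) {
        pthread_t tid;

        /* Spawn one pool thread; each runs the pool_thread event loop. */
        pthread_create (&tid, &attr, pool_thread, (void *) handle_queue);
      }

      pthread_attr_destroy (&attr);
    }

There is no direct pthreads counterpart to THR_NEW_LWP; on implementations that schedule each pthread on its own kernel thread, no equivalent hint is needed.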
3.1.2 The pool_thread() Function

Each newly created thread executes the following event loop in the pool_thread function:

    void *pool_thread (void *arg)
    {
      Handle_Queue *handle_queue =
        (Handle_Queue *) arg;

      /* The event loop for each
         thread in the thread pool. */
      for (;;) {
        HANDLE handle;

        /* Get next available HANDLE. */
        handle_queue_remove (handle_queue, &handle);

        /* Return stock quote to client. */
        handle_quote (handle);

        /* Close handle to prevent leaks. */
        close (handle);
      }
      /* NOTREACHED */
      return 0;
    }

When a pool thread becomes available, it will dequeue the next handle (corresponding to a client request), use it to look up the value of the stock quote, and return the quote to the client.

3.1.3 The Thread-Safe Handle_Queue

Most of the complexity in the thread pool implementation resides in the thread-safe Handle_Queue. Its state and synchronization variables are set up by the handle_queue_init function:

    void
    handle_queue_init (Handle_Queue *handle_queue, u_int max)
    {
      handle_queue->max_count_ = max;
      handle_queue->count_ = 0;
      handle_queue->head_ = handle_queue->tail_ = 0;

      /* Initialize synchronization variables that
         are local to a single process. */
      mutex_init (&handle_queue->lock_, USYNC_THREAD, 0);
      cond_init (&handle_queue->notempty_, USYNC_THREAD, 0);
      cond_init (&handle_queue->notfull_, USYNC_THREAD, 0);
    }

Three synchronization variables are used to implement the thread-safe Handle_Queue: two condition variables (cond_t notempty_ and notfull_) and one mutex (mutex_t lock_). The condition variables enable threads to insert and remove HANDLEs to and from the queue concurrently. The mutex lock_ is used by the condition variables to serialize access to the internal state of the queue, as shown in the handle_queue_insert function below:

    void
    handle_queue_insert (Handle_Queue *handle_queue,
                         HANDLE handle)
    {
      /* Ensure mutual exclusion for queue state. */
      mutex_lock (&handle_queue->lock_);

      /* Wait until there's room in the queue. */
      while (handle_queue->count_ == handle_queue->max_count_)
        cond_wait (&handle_queue->notfull_,
                   &handle_queue->lock_);
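The excerpt ends before handle_queue_insert finishes. Purely as a sketch, assuming the circular-buffer layout guessed at earlier in Section 3.1 (and not the authors' actual code), the insert operation might be completed, and the complementary handle_queue_remove called by pool_thread implemented, roughly as follows:

    /* Hypothetical completion of handle_queue_insert;
       the queue_ field and modulo arithmetic follow the assumed
       circular-buffer layout, with max <= MAX_HANDLES. */
    void
    handle_queue_insert (Handle_Queue *handle_queue,
                         HANDLE handle)
    {
      /* Ensure mutual exclusion for queue state. */
      mutex_lock (&handle_queue->lock_);

      /* Wait until there's room in the queue. */
      while (handle_queue->count_ == handle_queue->max_count_)
        cond_wait (&handle_queue->notfull_,
                   &handle_queue->lock_);

      /* Insert the HANDLE at the tail of the circular buffer. */
      handle_queue->queue_[handle_queue->tail_] = handle;
      handle_queue->tail_ =
        (handle_queue->tail_ + 1) % handle_queue->max_count_;
      handle_queue->count_++;

      /* Wake a pool thread waiting for work. */
      cond_signal (&handle_queue->notempty_);
      mutex_unlock (&handle_queue->lock_);
    }

    /* Hypothetical handle_queue_remove, the counterpart called
       by pool_thread. */
    void
    handle_queue_remove (Handle_Queue *handle_queue,
                         HANDLE *handle)
    {
      /* Ensure mutual exclusion for queue state. */
      mutex_lock (&handle_queue->lock_);

      /* Wait until at least one HANDLE is queued. */
      while (handle_queue->count_ == 0)
        cond_wait (&handle_queue->notempty_,
                   &handle_queue->lock_);

      /* Remove the HANDLE from the head of the circular buffer. */
      *handle = handle_queue->queue_[handle_queue->head_];
      handle_queue->head_ =
        (handle_queue->head_ + 1) % handle_queue->max_count_;
      handle_queue->count_--;

      /* Let the main thread know there is room again. */
      cond_signal (&handle_queue->notfull_);
      mutex_unlock (&handle_queue->lock_);
    }

Using two condition variables means the main thread is awakened only when space frees up and pool threads only when work arrives, which avoids needless wakeups when the queue is neither full nor empty.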
