Pthreads Programming
Bradford Nichols, Dick Buttlar, and Jacqueline Proulx Farrell (O'Reilly)

Computers are just as busy as the rest of us nowadays. They have lots of tasks to do at once, and need some cleverness to get them all done at the same time.

The book describes in a simple, clear manner what all the advanced features are for, and how threads interact with the rest of the UNIX system.

Topics include:

- Basic design techniques
- Mutexes, conditions, and specialized synchronization techniques
- Scheduling, priorities, and other real-time issues
- Cancellation
- UNIX libraries and re-entrant routines
- Signals
- Debugging tips
- Measuring performance
- Special considerations for the Distributed Computing Environment (DCE)

The third shows interleaved execution of the first two routines; the last, their simultaneous execution.

All sequences produce exactly the same result.

Figure: Possible sequences of the routines in the simple program

An obvious reason for exploiting potential parallelism is to make our program run faster on a multiprocessor.

However, there are additional reasons for investigating a program's potential parallelism. For example, a word processor could service print requests in one thread and process a user's editing commands in another.

Asynchronous events: If one or more tasks are subject to the indeterminate occurrence of events of unknown duration and unknown frequency, such as network communications, it may be more efficient to allow other tasks to proceed while the task subject to asynchronous events is in some unknown state of completion. For example, a network-based server could process in-progress requests in one group of threads while another thread waits for the asynchronous arrival of new requests from clients through network connections.

Real-time scheduling: If one task is more important than another, but both should make progress whenever possible, you may wish to run them with independent scheduling priorities and policies. For example, a stock information service application could use high priority threads to receive and update displays of online stock prices and low priority threads to display static data, manage background printing, and perform other less important chores.

Threads are a means to identify and utilize potential parallelism in a program. You can use them in your program design both to enhance its performance and to efficiently structure programs that do more than one thing at a time.

Specifying Potential Parallelism in a Concurrent Programming Environment

Now that we know the orderings that we desire or would allow in our program, how do we express potential parallelism at the programming level?

Those programming environments that allow us to express potential parallelism are known as concurrent programming environments. A concurrent programming environment lets us designate tasks that can run in parallel. It also lets us specify how we would like to handle the communication and synchronization issues that result when concurrent tasks attempt to talk to each other and share data.

Because most concurrent programming tools and languages have been the result of academic research or have been tailored to a particular vendor's products, they are often inflexible and hard to use. Pthreads, on the other hand, is designed to work across multiple vendors' platforms and is built on top of the familiar UNIX C programming interface. Pthreads gives you a simple and portable way of expressing multithreading in your programs.

Multiple Processes

Before looking at threads further, let's examine the concurrent programming interface that UNIX already supports: multiple processes. We can recast our earlier single-process program as one in which multiple processes execute its procedures concurrently.

The main routine starts in a single process, which we will refer to as the parent process. The fork call creates a child process that is identical to its parent process at the time the parent called fork, with a few differences, such as the child's distinct process ID and the different value fork returns in each process. The figure below shows a process as it forks.

Here, both parent and child are executing at the point in the program just following the fork call.

Interestingly, the child begins executing as if it were returning from the fork call issued by its parent. It can do so because it starts out as a nearly identical copy of its parent. The initial values of all of its variables and the state of its system resources such as file descriptors are the same as those of its parent.

Figure: A program before and after a fork

If the fork call returns to both the parent and child, why don't the parent and child execute the same instructions following the fork?

UNIX programmers specify different code paths for parent and child by examining the return value of the fork call. The fork call always returns a value of 0 to the child and the child's PID to the parent. Because of this semantic, we almost always see fork used in an if/else that branches on its return value, as in the sketch below. Each process executes its own instructions serially, although the way in which the statements of each may be interwoven by concurrency is utterly unpredictable.
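A minimal sketch of this idiom, with the parent also waiting for its child (our own reconstruction, not the book's exact example):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid < 0) {                /* fork failed */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {        /* child: fork returned 0 */
            printf("child: running one routine\n");
            exit(EXIT_SUCCESS);
        } else {                      /* parent: fork returned the child's PID */
            printf("parent: running another routine\n");
            waitpid(pid, NULL, 0);    /* suspend until the child exits */
        }
        return 0;
    }

The waitpid call at the end is the coarse synchronization discussed later in this chapter: the parent simply stalls until its child finishes.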

In fact, one process could completely finish before the other even starts or resumes (in the case in which the parent is the last to the finish line). The output of repeated test runs of such a program shows this: the interleaving differs from one run to the next.

When looking for concurrency, then, why choose multiple threads over multiple processes? The overwhelming reason lies in the single largest benefit of multithreaded programming: the operating system performs less work on behalf of a multithreaded program than it does for a multiprocess program.

This translates into a performance gain for the multithreaded program.

Pthreads Concurrent Programming: Multiple Threads

Now that we've seen how UNIX programmers traditionally add concurrency to a program, let's look at a way of doing so that employs threads.

With threads, our single-process program becomes one in which multiple threads execute its procedures concurrently. The program starts in a single thread, which, for reasons of clarity, we'll refer to as the main thread. For the most part, the operating system does not recognize any thread as being a parent or master thread—from its viewpoint, all threads in a process are equal.

In the same way that the processes behave in our multiprocess version of the program, each thread executes independently unless you add explicit synchronization. You create each thread with a call to pthread_create, providing the following arguments: a pointer to a pthread_t in which the call stores the new thread's identifier, a thread attribute object, the routine the thread should run, and an argument to pass to that routine. Because many of these types (like int) reveal quite a bit about the underlying architecture of a given platform (such as whether its addresses are 16, 32, or 64 bits long), POSIX prefers to create new data types that conceal these fundamental differences.

A thread attribute object specifies various characteristics for the new thread. In the example program, we pass a value of NULL for this argument, indicating that we accept the default characteristics for the new thread. Like most Pthreads calls, pthread_create returns a value that reports its outcome: a zero value represents success, and a nonzero value indicates and identifies an error. In later examples, we redeclare the routine to the correct prototype where possible.
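A minimal sketch of thread creation along these lines (the routine name and output are ours, not necessarily the book's):

    #include <pthread.h>
    #include <stdio.h>

    /* A thread-start routine must take a void * argument and return a void *. */
    void *do_one_thing(void *arg)
    {
        printf("thread %s running\n", (char *)arg);
        return NULL;
    }

    int main(void)
    {
        pthread_t thread1, thread2;

        /* NULL attribute object: accept the default thread characteristics. */
        pthread_create(&thread1, NULL, do_one_thing, "one");
        pthread_create(&thread2, NULL, do_one_thing, "two");

        /* Wait for both threads so main doesn't exit and take them with it. */
        pthread_join(thread1, NULL);
        pthread_join(thread2, NULL);
        return 0;
    }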

Threads are peers

In the multiprocess version of our example, we could refer to the caller of fork as the parent process and the process it creates as the child process. We could do so because UNIX process management recognizes a special relationship between the two.

It is this relationship that, for instance, allows a parent to issue a wait system call to implicitly wait for one of its children. The Pthreads concurrent programming environment maintains no such special relationship between threads.

We may call the thread that creates another thread the creator thread and the thread it creates the spawned thread, but that's just semantics. Creator threads and spawned threads have exactly the same properties in the eyes of the Pthreads library. The only thread that has slightly different properties than any other is the first thread in the process, which is known as the main thread.

In this simple program, none of the differences have any significance. Which thread will run first? Will one run to completion before the others, or will their execution be interleaved? It depends on the default scheduling policies of the underlying Pthreads implementation. It could be predictable, but then again, it may not be. The output can vary from run to run and from system to system.

Parallel vs. Concurrent Programming

Let's make a distinction between concurrent and parallel programming for the remainder of the book. We'll use concurrent programming in a general sense to refer to environments in which the tasks we define can occur in any order. One task can occur before or after another, and some or all tasks can be performed at the same time. We'll use parallel programming to specifically refer to the simultaneous execution of concurrent tasks on different processors.

Thus, all parallel programming is concurrent, but not all concurrent programming is parallel. The Pthreads standard specifies concurrency; it allows parallelism to be at the option of system implementors. As a programmer, all you can do is define those tasks, or threads, that can occur concurrently. Whether the threads actually run in parallel is a function of the operating system and hardware on which they run.

Because Pthreads was designed in this way, a Pthreads program can run without modification on uniprocessor as well as multiprocessor systems. Okay, so portability is great, but what of performance? All of our Pthreads programs will be running with specific Pthreads libraries, operating systems, and hardware. To squeeze the best performance out of a multithreaded application, you must understand the specifics of the environment in which it will be running—especially those details that are beyond the letter of the standard.

We'll spend some time in the later sections of this book identifying and describing the implementation-specific issues of Pthreads.

Synchronization

Even in our simple program, some parts can be executed in any order and some cannot. We must force an order upon the events in our program, or synchronize them, to guarantee that the last routine executes only after the first two have completed.


In threads programming, we use synchronization to make sure that one event in one thread happens before another event in another thread. A simple analogy would involve two people working together to jump start a car, one attaching the cables under the hood and one in the car getting ready to turn the key.

The two must use some signal between them so that the person connecting the cables completes the task before the other turns the key. This is real-life synchronization. In general, cooperation between concurrent procedures leads to the sharing of data, files, and communication channels.

This sharing, in turn, leads to a need for synchronization. For instance, consider a program that contains three routines.

Two routines write to variables and the third reads them. For the final routine to read the right values, you must add some synchronization. In the multiprocess version of our program, almost all of the function calls beyond fork are there to replace the synchronization that was inherent in the program when it executed serially—and slowly! The waitpid call provides synchronization by suspending its caller until a child process exits. Notice that we use the waitpid call only in the code path of the parent. Both the multiprocess and multithreaded versions of our program use coarse methods to synchronize.

One process or thread just stalled until the others caught up and finished. In later sections of this book we'll go into great detail on the finer methods of Pthreads synchronization, namely mutex variables and condition variables.

The finer methods allow you to synchronize thread activity on a thread's access to one or more variables, rather than blocking the execution of an entire routine and thread in which it executes.

Using the finer synchronization techniques, threads can spend less time waiting on each other and more time accomplishing the tasks for which they were designed. As a quick introduction to mutex variables, let's make a slight modification to the Pthreads version of our simple program. In the next version, we'll add a new variable, r3. Because all routines will read from and write to this variable, we'll need some synchronization to control access to it.
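A sketch of the idea, with the threads updating the shared r3 under a mutex (a reconstruction, not the book's exact listing):

    #include <pthread.h>

    int r3 = 0;                                /* shared by all threads */
    pthread_mutex_t r3_mutex;                  /* protects r3 */

    void *routine(void *arg)
    {
        pthread_mutex_lock(&r3_mutex);         /* enter the critical section */
        r3 = r3 + *(int *)arg;                 /* no other thread can touch r3 now */
        pthread_mutex_unlock(&r3_mutex);       /* leave the critical section */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        int one = 1, two = 2;

        pthread_mutex_init(&r3_mutex, NULL);   /* NULL: default mutex attributes */
        pthread_create(&t1, NULL, routine, &one);
        pthread_create(&t2, NULL, routine, &two);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }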

Just as a thread can have a thread attribute object, a mutex can have a mutex attribute object that indicates its special characteristics. Here, too, we'll pass a value of NULL for this argument, indicating that we accept the default characteristics for the new mutex. We'll synchronize the threads' access to r3 by using the mutex we created in the main thread. (Example: Concurrent Threads and a Mutex.)

Independent processes share nothing. Threads share such process resources as global variables and file descriptors.

If one thread changes the value of any such resource, the change will be evident to any other thread in the process, if anyone cares to look. The sharing of process resources among threads is one of the multithreaded programming model's major performance advantages, as well as one of its most difficult programming aspects. Having all of this context available to all threads in the same memory facilitates communication between threads. The way this works is pretty simple.

When the threads complete, the values of the output variables have been set to their final values and can simply be used. The processes in the multiprocess version of our program also use shared memory, but the program must do something special so that they can use it. We used the System V shared memory interface. Before it creates any child processes, the parent initializes a region of shared memory from the system using the shmget and shmat calls.

After the fork call, all the processes of the parent and its children have common access to this memory, using it in the same way as the multithreaded version uses global variables, and all the parent and children processes can see whatever changes any of them may make to it.
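A sketch of that setup, reduced to a single shared integer (error checking omitted; our reconstruction, as the book's example differs in detail):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Obtain and map a region of shared memory before forking. */
        int shmid   = shmget(IPC_PRIVATE, sizeof(int), IPC_CREAT | 0600);
        int *shared = (int *)shmat(shmid, NULL, 0);

        *shared = 0;
        if (fork() == 0) {                /* child */
            *shared = 42;                 /* visible to the parent */
            exit(EXIT_SUCCESS);
        }
        wait(NULL);                       /* parent waits, then reads */
        printf("shared value: %d\n", *shared);

        shmdt(shared);                    /* detach and remove the region */
        shmctl(shmid, IPC_RMID, NULL);
        return 0;
    }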

Communication

When two concurrent procedures communicate, one writing data and one reading data, they must adopt some type of synchronization so that the reader knows when the writer has completed and the writer knows that the reader is ready for more data. Some programming environments provide explicit communication mechanisms such as message passing.

The Pthreads concurrent programming environment provides a more implicit (some would call it primitive) mechanism: threads share all global variables. This affords threads programmers plenty of opportunities for synchronization. The multiprocess version of our program uses shared memory, but the other interprocess communication methods are equally valid. Even the waitpid call in our program could be used to exchange information, if the program checked its return value.

However, in the multiprocess world, all types of IPC involve a call into the operating system—to initialize shared memory or a message structure, for instance. This makes communication between processes more expensive than communication between threads.

Scheduling

We can also order the events in our program by imposing some type of scheduling policy on them. Unless our program is running on a system with an infinite number of CPUs, it's a safe bet that, sooner or later, there will be more concurrent tasks ready to run in our program than there are CPUs available to run them.

The operating system uses its scheduler to select from the pool of ready and runnable tasks those that it will run. In a sense, the scheduler synchronizes the tasks' access to a shared resource: the CPUs. Neither the multithreaded version of our program nor the multiprocess version imposes any specific scheduling requirements on its tasks.

POSIX defines some scheduling calls as an optional part of its Pthreads package, allowing you to select scheduling policies and priorities for threads.

Who Am I?

A thread can examine its own identity at run time by calling pthread_self, which returns its thread ID. (Example: Code that Examines the Identity of the Calling Thread, ident.c.)

Terminating Thread Execution

A process terminates when it comes to the end of main. At that time the operating system reclaims the process's resources and stores its exit status. Similarly, a thread exits when it comes to the end of the routine in which it was started.

By the way, all threads expire when the process in which they run exits. When a thread terminates, the Pthreads library reclaims any process or system resources the thread was using and stores its exit status.


In any of these cases (returning from the start routine, calling pthread_exit, or being canceled), the Pthreads library runs any routines in its cleanup stack and any destructors for keys in which it has stored values. We'll describe these features in Chapter 4, Managing Pthreads.
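As a peek ahead at those features, a cleanup handler sketch might look like this (names are ours; note that pthread_cleanup_push and pthread_cleanup_pop are macros that must be paired in the same scope):

    #include <pthread.h>

    void unlock_on_exit(void *arg)
    {
        /* Runs if the thread exits or is canceled while the handler is pushed. */
        pthread_mutex_unlock((pthread_mutex_t *)arg);
    }

    void *worker(void *arg)
    {
        pthread_mutex_t *lock = (pthread_mutex_t *)arg;

        pthread_mutex_lock(lock);
        pthread_cleanup_push(unlock_on_exit, lock);
        /* ... work that might call pthread_exit or hit a cancellation point ... */
        pthread_cleanup_pop(1);    /* nonzero argument: also run the handler now */
        return NULL;
    }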

Exit Status and Return Values

The Pthreads library may or may not save the exit status of a thread when the thread exits, depending upon whether the thread is joinable or detached. A joinable thread, the default state of a thread at its creation, does have its exit status saved; a detached thread does not. Detaching a thread gives the library a break and lets it immediately reclaim the resources associated with the thread.

In Chapter 4, we'll show you how to create a thread in the detached state by specifying attribute objects. What is the exit status of a thread?

You can associate an exit status with a thread in either of two ways: you can return a value from the thread-start routine, or you can pass a value to pthread_exit. However, you'll often find that your thread-start routines must return something other than an address (a plain integer result, for instance), and a fabricated value could be confused with the special value the library reports for a canceled thread. Of course, if the thread running the thread-start routine cannot be canceled (peek ahead to Chapter 4 to learn a bit about cancellation), you can ignore this restriction.
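A minimal sketch of both ways, with pthread_join retrieving the status (names are ours):

    #include <pthread.h>
    #include <stdio.h>

    static int result = 42;

    void *by_return(void *arg)
    {
        return &result;               /* exit status is the returned address */
    }

    void *by_exit(void *arg)
    {
        pthread_exit(&result);        /* exit status is the address passed */
    }

    int main(void)
    {
        pthread_t t;
        void *status;

        pthread_create(&t, NULL, by_return, NULL);
        pthread_join(t, &status);     /* blocks until the thread exits */
        printf("status: %d\n", *(int *)status);

        pthread_create(&t, NULL, by_exit, NULL);
        pthread_join(t, &status);
        printf("status: %d\n", *(int *)status);
        return 0;
    }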

The pthread_join call's purpose is to allow a single thread to wait on another's termination.

Pthreads Library Calls and Errors

Most Pthreads library calls return zero on success and an error number otherwise. You can use a simple test of the return value to perform error checking on a Pthreads call. If your platform supports a routine to convert error numbers to a readable string, such as the XPG4 call strerror, the reporting code can be quite compact. What you do beyond reporting depends upon what your program is doing and what type of error it encounters.
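For instance, a minimal error-checking pattern along those lines (our sketch):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void *routine(void *arg) { return NULL; }

    int main(void)
    {
        pthread_t thread;
        int rtn;

        /* Pthreads calls return the error number directly rather than setting errno. */
        if ((rtn = pthread_create(&thread, NULL, routine, NULL)) != 0) {
            fprintf(stderr, "pthread_create failed: %s\n", strerror(rtn));
            exit(EXIT_FAILURE);
        }
        pthread_join(thread, NULL);
        return 0;
    }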

As you may have noticed, we normally don't test the return values of the Pthreads library calls we make in the code examples in this book.

We felt that doing so would get in the way of the threads programming practices the examples are meant to illustrate. If we were writing this code for a commercial product, we would diligently perform all required error checking.

Why Use Threads Over Processes?

If both the process model and threads model can provide concurrent program execution, why use threads over processes?

Creating a new process can be expensive. It takes time. A call into the operating system is needed, and if the process creation triggers process rescheduling activity, the operating system's context-switching mechanism will become involved.

It takes memory. The entire process must be replicated. Add to this the cost of interprocess communication and synchronization of shared data, which also may involve calls into the operating system kernel, and threads provide an attractive alternative.

Threads can be created without replicating an entire process. Furthermore, some, if not all, of the work of creating a thread is done in user space rather than kernel space. When processes synchronize, they usually have to issue system calls, a relatively expensive operation that involves trapping into the kernel. But threads can synchronize by simply monitoring a variable—in other words, staying within the user address space of the program.

We'll spell out the advantages of threads over the multiprocess model of multitasking in our performance measurements in Chapter 6, Practical Considerations. In the meantime, we'll show you how to build a multithreaded program. Pthreads offers a clean, consistent way to address all of these motivations. If you're a disciplined programmer, designing and coding a multithreaded program should be easier than designing and coding a multiprocess program.

Now, we know that the example program we've been looking at in this chapter is far too simple to convince anyone that a particular programming style is more structured or elegant than another. Subsequent examples will venture into more complex territory and, in doing so, illustrate Pthreads mechanisms for a more practical set of coding problems. We hope that they may make the case for Pthreads.

Choosing Which Applications to Thread

The major benefit of multithreaded programs over nonthreaded ones is in their ability to concurrently execute tasks. However, in providing concurrency, multithreaded programs introduce a certain amount of overhead. If you introduce threads in an application that can't use concurrency, you'll add overhead without any performance benefit.

So what makes concurrency possible? First, of course, your application must consist of some independent tasks—tasks that do not depend on the completion of other tasks to proceed. Secondly, you must be confident that concurrent execution of these tasks would be faster than their serial execution.

On a multiprocessor, even CPU-bound tasks can benefit from concurrency because they can truly proceed in parallel. Multiprocessing UNIX hosts are not restricted to exotic scientific number crunching anymore as two- to four-CPU server and desktop platforms have become commonplace.

If your application has been designed to use multiple processes, it's likely that it would benefit from threading.

A common design model for a UNIX server daemon is to accept requests and fork a child image of itself to process the request. If the benefits of concurrency outweigh the overhead of using separate processes in the application, threading is bound to improve its performance, because threads involve less overhead.

The remaining class of applications that can benefit from threads are those that execute on multiprocessing systems. Purely CPU-bound applications can achieve a performance boost with threads only if their computations can run in parallel; on a uniprocessor, threading a purely compute-bound application adds overhead without making it finish sooner.

However, on a multiprocessor, this same application could speed up dramatically as the threads performed their computations in parallel. As we'll see in Chapter 6, there are commonly three different types of Pthreads implementations. To take full advantage of a multiprocessing system, you'll need an implementation that's sophisticated enough to allow multiple threads in a single process to access multiple CPUs.

Chapter 2: Designing Threaded Programs

Overview

So far you've seen only a couple of Pthreads calls, but you know enough about the programming model to start considering real design issues.

In this chapter we'll examine a number of broad questions. How much work is worth the overhead of creating a thread? How can I subdivide my program into threads? What relationship do the threads have to the functions in my program?

To give us a sample application worth threading, we'll introduce an application that will take us through most of the book: an ATM bank server. We'll try out our design ideas on this server.

Suitable Tasks for Threading

To find the places in your program where you can use threads, you essentially must look for the properties we identified in Chapter 1, Why Threads? Whenever a task has one or more of these properties, consider running it in a thread.

You can identify a task that is suitable for threading by applying to it the following criteria: Does the task use separate resources from other tasks?

Does its execution depend on the results of other tasks? Do other tasks depend on its results? We want to maximize concurrency and minimize the need for synchronization. The more tasks depend on each other and share resources, the more the threads executing them will end up blocked waiting on each other.

Can the task spend a long time in a suspended state? Does the task perform long computations, such as matrix crunching, hashing, or encryption? Time-consuming calculations that are independent of activities elsewhere in the program are good candidates for threading. In a multiprocessing environment, you might let a thread executing on one CPU process a long computation while other threads on other CPUs handle input. Must the task handle events that occur at random intervals, such as network communications or interrupts from hardware and the operating system?

Use threads to encapsulate and synchronize the servicing of these events, apart from the rest of your application. Must the task perform its work in a given amount of time? Must it run at specific times or specific time intervals? Is its work more time critical than that of other tasks? Scheduling considerations are often a good reason for threading a program. For instance, a window manager application would assign a high priority thread to user input and a much lower priority thread to memory garbage collection.

Server programs—such as those written for database managers, file servers, or print servers—are ideal applications for threading. They must be continuously responsive to asynchronous events—requests for services coming over communications channels from a number of client programs. Computational and signal-processing applications that will run on multiprocessing systems are another good candidate for threading.

Finally, real-time developers are attracted to threads as a model for servers and multiprocessing applications.

Multithreaded applications are more efficient than multiprocess applications. The threads model also allows the developers to set specific scheduling policies for threads. What's more, threads eliminate some of the complexity that comes with asynchronous programming. Threads wait for events whereas a serial program would be interrupted and would jump from context to context.

Models

There are no set rules for threading a program. Every program is different, and when you first try to thread a program, you'll likely discover that it's a matter of trial and error.

You may initially dedicate a thread to a particular task only to find that your assumptions about its activity have changed or weren't true in the first place. Over time a few common models for threaded programs have emerged. These models define how a threaded application delegates its work to its threads and how the threads intercommunicate. Because the Pthreads standard imposes little structure on how programmers use threads, you would do well to start your multithreaded program design with a close look at each model.

Although none has been explicitly designed for a specific type of application, you'll find that each model tends to be better suited than the others for certain types. We discuss the boss/worker model, the peer model, and the pipeline model.

The boss/worker model

In the boss/worker model, a single thread, the boss, accepts input for the entire program. Based on that input, the boss passes off specific tasks to one or more worker threads. The boss creates each worker thread, assigns it tasks, and, if necessary, waits for it to finish.

In pseudocode, the boss dynamically creates a new worker thread when it receives a new request.

After creating each worker, the boss returns to the top of its loop to process the next request. If no requests are waiting, the boss loops until one arrives. Once finished, each worker thread can be made responsible for any output resulting from its task, or it can synchronize with the boss and let it handle its output. Alternatively, the boss could save some run-time overhead by creating all worker threads up front.

Each worker immediately suspends itself to wait for a wake-up call from the boss when a request arrives for it to process. The boss advertises work by queuing requests on a list from which workers retrieve them.
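A compact sketch of that queue-based arrangement (our own C rendering, not the book's pseudocode; for simplicity the boss queues no more requests than the queue has slots):

    #include <pthread.h>
    #include <stdio.h>

    #define NWORKERS 4
    #define QSIZE    16

    static int queue[QSIZE], head = 0, tail = 0;
    static pthread_mutex_t qlock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qready = PTHREAD_COND_INITIALIZER;

    void *worker(void *arg)
    {
        for (;;) {
            pthread_mutex_lock(&qlock);
            while (head == tail)              /* queue empty: wait for the boss */
                pthread_cond_wait(&qready, &qlock);
            int request = queue[head % QSIZE];
            head++;
            pthread_mutex_unlock(&qlock);
            printf("worker %ld handling request %d\n", (long)arg, request);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t workers[NWORKERS];

        for (long i = 0; i < NWORKERS; i++)   /* create all workers up front */
            pthread_create(&workers[i], NULL, worker, (void *)i);

        for (int r = 0; r < QSIZE; r++) {     /* boss: accept and queue requests */
            pthread_mutex_lock(&qlock);
            queue[tail % QSIZE] = r;
            tail++;
            pthread_cond_signal(&qready);     /* wake one waiting worker */
            pthread_mutex_unlock(&qlock);
        }
        pthread_exit(NULL);                   /* boss exits; workers keep serving */
    }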

The complexities of dealing with asynchronously arriving requests and communications are encapsulated in the boss. The specifics of handling requests and processing data are delegated to the workers. In this model, it is important that you minimize the frequency with which the boss and workers communicate. The boss can't spend its time being blocked by its workers and allow new requests to pile up at the inputs.

Likewise, you can't create too many interdependencies among the workers. If every request requires every worker to share the same data, all workers will suffer a slowdown.

The peer model

In the peer model, also known as the workcrew model, one thread must create all the other peer threads when the program starts. A peer knows its own input ahead of time, has its own private way of obtaining its input, or shares a single point of input with other peers.

The structure of such a program is sketched below. Because there is no boss, peers themselves must synchronize their access to any common sources of input. Consider an application in which a single plane or space is divided among multiple threads, perhaps so they can calculate the spread of a life form (such as in the SimLife computer game) or changes in temperature as heat radiates across geographies from a source.
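One way such a peer structure might look, using a POSIX barrier (an optional POSIX feature, and our choice rather than the book's) so the peers stay in step:

    #include <pthread.h>

    #define NPEERS 4
    #define NSTEPS 10

    static double plane[NPEERS];               /* one strip of the space per peer */
    static pthread_barrier_t step_barrier;

    void *peer(void *arg)
    {
        int id = *(int *)arg;

        for (int step = 0; step < NSTEPS; step++) {
            plane[id] += 1.0;                  /* stand-in for one delta of change */
            /* Wait until every peer finishes this step before any begins the next. */
            pthread_barrier_wait(&step_barrier);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t peers[NPEERS];
        int ids[NPEERS];

        pthread_barrier_init(&step_barrier, NULL, NPEERS);
        for (int i = 0; i < NPEERS; i++) {
            ids[i] = i;
            pthread_create(&peers[i], NULL, peer, &ids[i]);
        }
        for (int i = 0; i < NPEERS; i++)
            pthread_join(peers[i], NULL);
        return 0;
    }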

Each thread can calculate one delta of change. However, because the results of each thread's calculations require the adjustment of the bounds of the next thread's calculations, all threads must synchronize afterward to exchange and compare each other's results. This is a classic example of a peer model application.

Pipeline Model

The pipeline model assumes a long stream of input, with a series of suboperations (stages) through which every unit of input must pass. The classic analogy is an automobile assembly line: each car goes through a series of stages on its way to the exit gates.


At any given time many cars are in some stage of completion. A RISC (reduced instruction set computing) processor also fits the pipeline model. The input to this pipeline is a stream of instructions. Each instruction must pass through the stages of decoding, fetching operands, computation, and storing results.

That many instructions may be at various stages of processing at the same time contributes to the exceptionally high performance of RISC processors. In each of these examples, a pipeline improves throughput because it can accomplish the many different stages of a process on different input units be they cars or instructions concurrently.

Instead of taking each car or instruction from start to finish before starting the next, a pipeline allows as many cars or instructions to be worked on at the same time as there are stages to process them. It still takes the same amount of time from start to finish for a specific car (that red one, for instance) or instruction to be processed, but the overall throughput of the assembly line or computer chip is greatly increased.

Figure: A thread pipeline

In this model, a single thread receives input for the entire program, always passing it to the thread that handles the first stage of processing. Similarly a single thread at the end of the pipeline produces all final output for the program. Each thread in between performs its own stage of processing on the input it received from the thread that performed the previous stage, and passes its output to the thread performing the next.
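A sketch of a three-stage pipeline, with a one-slot hand-off buffer between adjacent stages (our construction; the book's pseudocode differs):

    #include <pthread.h>
    #include <stdio.h>

    typedef struct {                  /* one-slot buffer between two stages */
        pthread_mutex_t lock;
        pthread_cond_t  changed;
        int item, full;
    } slot_t;

    static slot_t ab = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };
    static slot_t bc = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 };

    static void put(slot_t *s, int item)
    {
        pthread_mutex_lock(&s->lock);
        while (s->full)               /* wait until the next stage empties the slot */
            pthread_cond_wait(&s->changed, &s->lock);
        s->item = item;
        s->full = 1;
        pthread_cond_signal(&s->changed);
        pthread_mutex_unlock(&s->lock);
    }

    static int get(slot_t *s)
    {
        pthread_mutex_lock(&s->lock);
        while (!s->full)              /* wait until the previous stage fills the slot */
            pthread_cond_wait(&s->changed, &s->lock);
        int item = s->item;
        s->full = 0;
        pthread_cond_signal(&s->changed);
        pthread_mutex_unlock(&s->lock);
        return item;
    }

    void *stage2(void *arg) { for (;;) put(&bc, get(&ab) * 2); return NULL; }
    void *stage3(void *arg) { for (;;) printf("out: %d\n", get(&bc)); return NULL; }

    int main(void)                    /* main acts as stage 1, feeding raw input */
    {
        pthread_t t2, t3;

        pthread_create(&t2, NULL, stage2, NULL);
        pthread_create(&t3, NULL, stage3, NULL);
        for (int i = 0; i < 10; i++)
            put(&ab, i);
        pthread_exit(NULL);           /* keep the pipeline threads running */
    }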

Applications in which the pipeline might be useful are image processing and text processing or any application that can be broken down into a series of filter steps on a stream of input.


We could also dynamically configure the pipeline at run time, having it create and terminate stages and the threads to service them as needed. Note that the overall throughput of a pipeline is limited by the thread that processes its slowest stage. Threads that follow it in the pipeline cannot perform their stages until it has completed.

When designing a multithreaded program according to the pipeline model, you should aim at balancing the work to be performed across all stages; that is, all stages should take about the same amount of time to complete.

Buffering Data Between Threads

Within any of these models, threads transfer data to each other using buffers. In the pipeline model, each thread must pass input to the thread that performs the next stage of processing.

Even in the peer model, peers may often exchange data. A thread assumes either of two roles as it exchanges data in a buffer with another thread. The thread that passes the data to another is known as the producer; the one that receives that data is known as the consumer: Figure depicts this relationship.


A buffer: The buffer can be any data structure accessible to both the producer and the consumer. This is a simple matter for a multithreaded program, for such a shared buffer need only be in the process's global data region.

The buffer can be just big enough to hold one data item or it can be larger, depending upon the application.

A lock: Because the buffer is shared, the producer and consumer must synchronize their access to it.

With Pthreads, you would use a mutex variable as a lock.

A suspend/resume mechanism: The consumer may suspend itself when the buffer is empty. If so, the producer must be able to resume it when it places a new item in the buffer. With Pthreads, you would arrange this mechanism using a condition variable.

State information: Some flag or variable should indicate how much data is in the buffer.

In pseudocode: the producer thread takes a lock on the shared buffer, places a work item in it, releases the lock, and resumes the consumer thread. The consumer thread is more complex.

It first takes a lock on the shared buffer. If it finds the buffer empty, it releases the lock (thus giving the producer a chance to populate it with work) and hibernates. When the consumer thread awakens, it reacquires the lock and removes a work item from the buffer. In this case the producer and consumer must agree upon a mechanism for keeping track of how many items are currently in the buffer.
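In Pthreads terms, that logic maps onto a mutex plus a condition variable. A sketch with a one-item buffer (our own):

    #include <pthread.h>

    static pthread_mutex_t block = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  bcond = PTHREAD_COND_INITIALIZER;
    static int buffer;
    static int full = 0;                 /* state information: is an item present? */

    void producer_put(int item)
    {
        pthread_mutex_lock(&block);      /* take the lock */
        buffer = item;                   /* place a work item in the buffer */
        full = 1;
        pthread_cond_signal(&bcond);     /* resume the consumer */
        pthread_mutex_unlock(&block);    /* release the lock */
    }

    int consumer_get(void)
    {
        pthread_mutex_lock(&block);      /* first take the lock */
        while (!full)                    /* empty: release the lock and hibernate */
            pthread_cond_wait(&bcond, &block);
        /* Awakened: pthread_cond_wait has reacquired the lock for us. */
        int item = buffer;               /* remove the work item */
        full = 0;
        pthread_mutex_unlock(&block);
        return item;
    }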

Using double buffering, threads act as both producer and consumer to each other. In the double-buffering scheme, one set of buffers contains unprocessed data and another set contains processed data. An I/O thread fills one set with the data it reads in; in other words, it's the producer of unprocessed data. It also writes out the contents of the other set; that is, it's the consumer of processed data.


The calculating thread is thus the consumer of unprocessed data and the producer of processed data.

Figure: Double buffering

Some Common Problems

Regardless of the model you select, a few classes of bugs creep into nearly every threaded application at some point during its development. Avoiding them takes a lot of concentration. Finding them once they've crept in requires patience and persistence.

Most bugs result from oversights in the way the application manages its shared resources. Either you forget to keep one thread out of a resource while another is modifying it, or the way in which you attempt to synchronize access to the resource causes your threads to hang. We'll walk through a debugging session for a multithreaded program in Chapter 6, Practical Considerations. For the time being, we'll rest content with pronouncing a few basic rules and noting the most common pitfalls.

The basic rule for managing shared resources is simple and twofold: obtain a lock before accessing the resource, and release the lock when you are finished with it. The same caution applies to library routines: when in doubt, assume that they are not thread-safe until proven otherwise. You can protect an uncertain routine by serializing the calls to it behind a lock of its own.
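For example, a sketch of serializing calls to a hypothetical routine whose thread safety is uncertain:

    #include <pthread.h>

    /* Stand-in for a routine that may not be thread-safe. */
    static int state;
    static void uncertain_routine(int arg) { state += arg; }

    static pthread_mutex_t serialize = PTHREAD_MUTEX_INITIALIZER;

    /* Only one thread at a time may be inside uncertain_routine. */
    void safe_uncertain_routine(int arg)
    {
        pthread_mutex_lock(&serialize);
        uncertain_routine(arg);
        pthread_mutex_unlock(&serialize);
    }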

Pthreads implementations also differ, so a program that runs fine on one platform may fail or produce wrong results on another. For example, the maximum number of threads permitted and the default thread stack size are two important limits to consider when designing your program.
