With respect to state, threads and processes differ in their ability to share it. Threads can easily share state, as long as it's with other threads in the same process; processes have a more difficult (but not impossible) time sharing state with other processes.
If your colleague's server were stateless, there would be nothing to share, and processes would do just as well as threads. Given your colleague's claim, it seems that the server is stateful.
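As an illustration, here's a minimal sketch (POSIX threads assumed; the names are made up) of two threads updating one shared counter. Two processes wanting the same effect would need an explicit arrangement such as shared memory or message passing.

    /* Two threads sharing one counter; both see the same variable because
       they run in the same address space.  (POSIX threads assumed.) */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                        /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* coordinate the shared update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;

        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter);         /* updates from both threads are visible */
        return 0;
    }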
Most of the answers to this question fell short in one way or another. The reliability of TCP vs. UDP was often raised but not explained well. Several answers claimed that the server necessarily could not store state, but the reasoning behind this claim was unclear (to me, at least).
Yes, it can; see Section 10.8 in Comer and Stevens. UDP has no formal notion of a connection, but the presence of a message, any message, at the server is enough to trigger the service.
Most people got this question wrong, about half making the same mistake I made when I first read Section 10.8. Most of the rest of the wrong answers claimed it was impossible because the server wouldn't know where to send the reply, forgetting that there are several system calls in the recv family (recvfrom(), for example) that return the sender's address along with the data sent.
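As a sketch of how that works (the port number is made up), a connectionless UDP service can be as simple as the following: recvfrom() hands back the sender's address along with the datagram, and sendto() uses that address for the reply.

    /* A connectionless UDP echo-style service: no connection, no accept(). */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in local;
        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(9000);               /* hypothetical service port */
        bind(fd, (struct sockaddr *)&local, sizeof(local));

        for (;;) {
            char buf[512];
            struct sockaddr_in peer;
            socklen_t peerlen = sizeof(peer);

            /* Any arriving datagram triggers the service; peer records who sent it. */
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&peer, &peerlen);
            if (n < 0)
                continue;

            /* Reply to whoever sent the request. */
            sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, peerlen);
        }
    }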
Given select(), poll(), or other facilities for managing many active file descriptors, does it make sense for a client to implement concurrency using threads instead of iterative multiplexing? Explain.
Select-like facilities solve the problem of determining which file descriptors are ready to be read (or are in some other interesting state) without committing to the read. This facility is essential to iterative multiplexing: because there's only one thread, committing that thread to a file descriptor that isn't read-ready blocks the whole program until the descriptor becomes read-ready.
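A sketch of such an iterative, select()-based loop follows (the listening socket and its setup are assumed): select() reports which descriptors are read-ready, so the single thread never commits to a read that would block.

    /* One thread multiplexing many descriptors with select(). */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void serve(int listenfd, int maxfd, fd_set *active)
    {
        for (;;) {
            fd_set ready = *active;                 /* select() overwrites its argument */
            select(maxfd + 1, &ready, NULL, NULL, NULL);

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &ready))
                    continue;
                if (fd == listenfd) {
                    int conn = accept(listenfd, NULL, NULL);  /* new connection */
                    FD_SET(conn, active);
                    if (conn > maxfd)
                        maxfd = conn;
                } else {
                    char buf[512];
                    ssize_t n = read(fd, buf, sizeof(buf));   /* known to be read-ready */
                    if (n <= 0) {                             /* peer closed or error */
                        close(fd);
                        FD_CLR(fd, active);
                    }
                    /* ... otherwise handle the request iteratively ... */
                }
            }
        }
    }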
Threads may or may not have a problem with premature committing, depending on how I/O behaves. If a blocking read stops only the individual thread that issued it, then a select-like facility isn't necessary. If, however, a single blocked thread can block the whole process, including all the other threads, then something such as select() will be needed to maintain high concurrency levels.
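Under the first assumption, where a blocked thread stalls only itself, a thread-per-socket sketch like the following needs no select() at all (the fetch() and start_fetchers() names, and the connection setup, are made up).

    /* One thread per connected socket; each recv() blocks only its own thread. */
    #include <pthread.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *fetch(void *arg)
    {
        int fd = (int)(intptr_t)arg;                /* this thread's connected socket */
        char buf[512];
        ssize_t n;

        /* Blocks here without any select(); the other threads keep running. */
        while ((n = recv(fd, buf, sizeof(buf), 0)) > 0) {
            /* ... consume the data ... */
        }
        close(fd);
        return NULL;
    }

    /* Start one thread per already-connected socket. */
    void start_fetchers(const int *fds, int nfds, pthread_t *tids)
    {
        for (int i = 0; i < nfds; i++)
            pthread_create(&tids[i], NULL, fetch, (void *)(intptr_t)fds[i]);
    }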
Most of the answers to this question were mostly correct, although few zeroed in directly on the answer. Some answers dealt with I/O-bound vs. compute-bound services, which is an important point but somewhat beside what the question is asking for.
The principal concurrency-management cost in the boss-worker model is the cost of creating and deleting the workers. Synchronization management is a secondary cost, both because it's less expensive than process create-delete costs and because the boss-worker model uses only simple synchronization.
Because an accepted socket cannot easily be shared with an already-existing process, a TCP-based server using the boss-worker model has to create a worker for every TCP connection accepted, and then delete the worker when the connection closes. In contrast, a UDP-based boss-worker server can create its workers once and then forward incoming messages to them for further processing, trading the more expensive create-delete costs for the cheaper inter-process communication costs.
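Here's a sketch of the TCP side of that trade-off, with fork()-based workers assumed (a thread-based boss-worker would be analogous): the accepted socket is what forces a worker to be created per connection.

    /* TCP boss: every accepted connection costs a worker creation and deletion. */
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void boss(int listenfd)
    {
        for (;;) {
            int conn = accept(listenfd, NULL, NULL);   /* one socket per connection */
            if (conn < 0)
                continue;

            if (fork() == 0) {                         /* worker created per connection */
                close(listenfd);
                /* ... serve the connection on conn ... */
                close(conn);
                _exit(0);                              /* worker deleted with the connection */
            }

            close(conn);                               /* boss keeps only the listener */
            while (waitpid(-1, NULL, WNOHANG) > 0)     /* reap finished workers */
                ;
        }
    }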
Many answers to this question didn't deal with concurrency management at all, which is not a good way to answer a question about concurrency management. Instead, answers dealt with such issues as transmission reliability, load balancing, and server deadlock. It's not that these issues are unimportant; it's just that they aren't too relevant to concurrency-management efficiency.
This page last modified on 3 March 2004.