CS 537, Client-Server Interfaces

Spring 2002 - Test 2


  1. Explain how a shared address space is both a help and a hindrance when using threads to provide server concurrency.


    The main help that a shared address space provides to threads is free access to shared state. Two threads running in the same address space can access shared state by accessing the same address (the address of the shared state) using the normal mechanisms (read and write). Sharing state between two threads in separate address spaces is more expensive and cumbersome, requiring system calls and slow data transfers between address spaces.
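
    For example, here is a minimal sketch using POSIX threads (the request and reply buffers and the worker function are invented for illustration): the worker reads the request and writes its reply with ordinary loads and stores to the same addresses the main thread uses, with no system call or cross-address-space copy in between.

        #include <pthread.h>
        #include <stdio.h>
        #include <string.h>

        /* State shared by both threads: each sees it at the same address. */
        static char request[64];
        static char reply[64];

        static void *worker(void *arg)
        {
            /* Plain reads and writes -- no system call, no data transfer. */
            printf("worker sees: %s\n", request);
            strcpy(reply, "done");
            return NULL;
        }

        int main(void)
        {
            pthread_t t;

            strcpy(request, "resize image");        /* written before the thread starts */
            pthread_create(&t, NULL, worker, NULL);
            pthread_join(t, NULL);
            printf("main sees reply: %s\n", reply); /* written by the worker */
            return 0;
        }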

    The hindrances that a shared address space causes are essentially the flip side of the help it provides: there is no protection or isolation within a shared address space. A misbehaving thread can corrupt the state needed by every thread in the computation. A computation that uses separate address spaces, in contrast, confines the effects of a misbehaving thread to its own address space, apart from malignant communications with other address spaces in the computation (and various sorts of resource abuses).

    Some people tried to claim that the need to synchronize was a hindrance. But whenever data are shared there's a need to synchronize, so it really has nothing to do with shared memory (although you'd expect shared-memory synchronization to be more efficient than inter-process synchronization supported by the OS, which is an advantage, not a hindrance).
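
    As a sketch of that shared-memory synchronization (POSIX threads again; the counter and loop count are illustrative), a mutex living in the shared address space is all the workers need to keep a shared counter consistent:

        #include <pthread.h>
        #include <stdio.h>

        static long hits = 0;                        /* data shared by the workers */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg)
        {
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);           /* guard every shared update */
                hits++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;

            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("hits = %ld\n", hits);            /* always 200000 */
            return 0;
        }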


  2. Under which circumstances will a single-threaded server perform no worse than a server implementing concurrency with multiple processes?


    Assuming each server performs the same amount of work processing each request, the single-threaded server's performance will always be better than a multi-process server's performance. The single-threaded server doesn't have to pay the overhead of managing the worker processes, which means it can turn around requests more quickly than does a multi-process server.

    Unfortunately, the analysis in the previous paragraph holds only when requests arrive at a rate slower than the server's ability to process them. Once the requests start to queue up, however, the advantages reverse and the multi-process server starts to perform better than the single-threaded server.

    A number of people forgot to include some mention of the arrival rate in their answer; other people got the relation backward (claiming single-threaded servers are better at higher arrival rates). To see the importance, imagine it takes one minute to service a request and one minute of overhead to manage a worker process. A single-threaded server can crank out one reply per minute, while a fork-concurrent server takes two minutes per request. However, as soon as the request arrival rate exceeds one per minute, a queue builds up at the single-threaded server, which increases the effective service time seen at the client by the average length of the queue (each queued request adds another service time to the wait). The fork-concurrent server, however, can continue to crank out replies at an effective service time of one every two minutes.
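
    To make the comparison concrete, here is a sketch of the two accept loops side by side (BSD sockets; the port number and the handle_request stub are stand-ins for real request processing). The single-threaded loop services one request at a time, so arrivals faster than the service rate pile up in the listen queue; the fork-concurrent loop pays the process-management overhead up front so queued requests can be serviced in parallel.

        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <sys/wait.h>
        #include <unistd.h>
        #include <stdlib.h>

        /* Stand-in for the real per-request work. */
        static void handle_request(int fd) { write(fd, "ok\n", 3); }

        int main(void)
        {
            struct sockaddr_in addr = { 0 };
            int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(8000);               /* illustrative port */
            bind(listen_fd, (struct sockaddr *) &addr, sizeof addr);
            listen(listen_fd, 16);

            for (;;) {
                int fd = accept(listen_fd, NULL, NULL);

                /* Single-threaded version: the next request waits in the
                 * listen queue until this one is finished.
                 *     handle_request(fd); close(fd);
                 */

                /* Fork-concurrent version: overhead now, parallelism later. */
                if (fork() == 0) {                     /* child services the request */
                    handle_request(fd);
                    close(fd);
                    exit(0);
                }
                close(fd);                             /* parent returns to accept() */
                while (waitpid(-1, NULL, WNOHANG) > 0)
                    ;                                  /* reap finished children */
            }
        }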


  3. Explain how a single-threaded server can benefit from pre- and delayed-allocation.


    The apparent contradiction in this question involves allocation (either pre- or delayed), which creates worker computations, and single-computation servers, which have no worker computations. How can a server pre-allocate or delay the allocation of something it never uses? One way to answer this question is to consider what happens after a worker computation gets allocated.

    A worker computation, once allocated, has to set up the state it needs to respond to the request. Pre-allocation in single-computation servers no longer refers to worker computations, but it can still refer to the state initialization required to respond to requests. Much of the computational state requires system calls - dynamic memory, synchronization mechanisms, sockets - which can be expensive. Pre-allocating these resources when no request is waiting can speed up the response to a request when it eventually arrives at the server.
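
    A sketch of the idea (the buffer size, the back-end socket, and the function name are invented for illustration): the expensive system calls happen once, while the server is idle, so no request has to pay for them.

        #include <stdlib.h>
        #include <sys/socket.h>

        #define BUF_SIZE (64 * 1024)

        /* Resources created while no request is waiting. */
        static char *scratch_buf;      /* pre-allocated request/reply buffer   */
        static int   backend_fd;       /* pre-opened socket to a back-end host */

        static void preallocate(void)
        {
            scratch_buf = malloc(BUF_SIZE);
            backend_fd  = socket(AF_INET, SOCK_STREAM, 0);
            /* ... connect(), pthread_mutex_init(), and so on could go here ... */
        }

        int main(void)
        {
            preallocate();             /* before the accept loop, not per request */
            /* ... accept loop goes here; each request reuses scratch_buf ... */
            return 0;
        }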

    A single-computation server can use delayed allocation in the same way a multi-computation server does: as a more effective way to control resource use when responding to requests. Resource limits usually make it impractical to pre-allocate enough state to handle worst-case demand, and the relative infrequency with which worst-case demand occurs suggests that it might be wise to delay allocations past a certain point, to make sure the demand is long-lived enough to cover the cost of the extra resource management.
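
    The delayed half can be sketched the same way (the threshold and buffer size are invented): the extra resources are allocated only after the queue shows the demand is long-lived enough to justify them.

        #include <stdlib.h>

        #define BURST_THRESHOLD 32       /* illustrative: queue depth worth reacting to */

        static int   queued_requests;    /* requests currently waiting          */
        static char *extra_buf;          /* allocated only under sustained load */

        /* Called as each new request is queued. */
        static void maybe_grow(void)
        {
            if (queued_requests > BURST_THRESHOLD && extra_buf == NULL)
                extra_buf = malloc(1 << 20);     /* 1 MB, illustrative size */
        }

        int main(void)
        {
            queued_requests = 40;        /* pretend a sustained burst has arrived */
            maybe_grow();
            free(extra_buf);
            return 0;
        }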

    A lot of people just ignored the single-threaded part of this question and answered as if the server were multi-computation. Naughty, naughty.


  4. Explain how client-side concurrency can eliminate potential deadlocks.


    Client-side deadlock can occur when a client hangs waiting for a reply to a request. The difficulty with this kind of deadlock is that it is hard for the client to determine whether the source of the delay is the network or the server (or even network processing on the local host). Whatever the cause, the effect on the client is the same - progress stops. (Strictly speaking, deadlock requires that there be no possible chance of further progress, rather than mere uncertainty about further progress. With respect to the client, however, the weaker definition is more appropriate due to the importance of fast response times.)

    Concurrency can help deal with this form of client-side deadlock in a number of ways. First, issuing concurrent requests to separate servers reduces the possibility that misbehaving servers will prevent client progress; all the client needs is a connection to one working server. Second, concurrency makes it easy to set up watchdog processes in the client that monitor the progress of requests and handle any that appear to be sliding into deadlock.
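
    As a minimal sketch of both points (the descriptors and the five-second timeout are illustrative, and select() with a timeout stands in for a separate watchdog process): the client watches two outstanding requests, one per server, and treats silence past the deadline as a suspected deadlock.

        #include <sys/select.h>
        #include <sys/time.h>
        #include <stdio.h>

        /* Watch two outstanding requests (one per server) plus a timeout.
         * Returns the descriptor whose reply arrived first, or -1 if the
         * watchdog timer expired before either server answered. */
        static int first_reply(int fd_a, int fd_b, int seconds)
        {
            fd_set readable;
            struct timeval deadline = { seconds, 0 };
            int maxfd = (fd_a > fd_b) ? fd_a : fd_b;

            FD_ZERO(&readable);
            FD_SET(fd_a, &readable);
            FD_SET(fd_b, &readable);

            if (select(maxfd + 1, &readable, NULL, NULL, &deadline) <= 0)
                return -1;                       /* suspected deadlock: nobody replied */
            return FD_ISSET(fd_a, &readable) ? fd_a : fd_b;
        }

        int main(void)
        {
            /* Illustrative: descriptor 0 stands in for sockets that already
             * have requests written to them. */
            int winner = first_reply(0, 0, 5);

            if (winner < 0)
                printf("no reply in 5 seconds; retry or switch servers\n");
            else
                printf("first reply arrived on descriptor %d\n", winner);
            return 0;
        }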

    Most people got this mostly right, although many missed the need for some kind of watchdog process to ensure progress.



This page last modified on 11 April 2002.