CS 537, Client-Server Interfaces

Spring 2003 - Test 1


  1. The Accordion server, as specified, is a stateful server. Describe the smallest set of changes required to the messages in the client-server protocol that would allow the Accordion server to be implemented without state.

    Hint: The answer to this question should be about as long as the answer to every other question in this test.


    If the server can't keep any state, then the clients have to, which means the messages are responsible for shipping the state back and forth between the client and server. The client doesn't care what the state is; it just acts as a storage place. At a minimum, then, each message would need two new fields: one to indicate the size of the state and the other to hold the state itself. In addition, because the game id serves largely as a key to let the server retrieve the proper state, you could also remove the game-id field from those messages that have it.
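
    As a rough illustration (a sketch under assumptions, not part of the Accordion specification), each message might carry the state in a pair of fields like these; the field names and the 512-byte limit are made up:

        #include <stdint.h>

        #define MAX_STATE 512   /* assumed upper bound on serialized game state */

        /* Hypothetical stateless-protocol message: every request and reply
         * ships the serialized game state, so the server never stores it
         * and no longer needs the game id as a lookup key. */
        struct accordion_msg {
            uint16_t opcode;            /* which Accordion operation */
            uint16_t state_len;         /* number of valid bytes in state[] */
            uint8_t  state[MAX_STATE];  /* opaque game state, round-tripped
                                           between client and server */
            /* operation-specific fields would follow here */
        };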

    Most people got this question, although many described the changes to only one or two of the messages.


  2. Explain the distinguishing features of TCP vs. UDP with respect to the possibility of server deadlock.


    UDP provides all-or-nothing delivery; a UDP packet either makes it to the recipient in one piece or it doesn't make it at all. The same goes for sending a UDP message: the message either goes out or there's something wrong locally. Because UDP is atomic (either it happens or it doesn't; there's no intermediate state) and local (it happens or it doesn't because of local conditions, not conditions on the other end), there is no possibility of deadlock over UDP.

    TCP, on the other hand, has problems in both directions. Because TCP provides a byte-stream service, messages are delivered non-atomically, and may also be sent non-atomically. The potential for partial transmissions means the recipient could be hung up waiting for the rest of a message that may never come. In addition, TCP provides end-to-end reliability, which means the state of the remote end can affect the state of the local end. In particular, if the remote end stops receiving data while the local end keeps on sending, buffers along the connection will eventually fill, and the local end will block. If both ends continually send, traditional deadlock will ensue.
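
    To make the partial-delivery problem concrete, here is a minimal sketch of a helper that reads an exact number of bytes from a TCP socket; the helper name read_full is made up, and it assumes some protocol that tells the reader how many bytes to expect:

        #include <sys/types.h>
        #include <unistd.h>

        /* Read exactly len bytes from a TCP socket.  Because TCP is a
         * byte stream, read() may return only part of the message, so
         * the loop blocks until the rest arrives; if the sender never
         * finishes the message, the reader hangs here forever. */
        ssize_t read_full(int fd, void *buf, size_t len)
        {
            size_t got = 0;
            while (got < len) {
                ssize_t n = read(fd, (char *)buf + got, len - got);
                if (n <= 0)             /* error or connection closed early */
                    return -1;
                got += n;
            }
            return (ssize_t)got;
        }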

    This question turned out to be tricky because of the slippery definition of deadlock. In particular, many people confused denial-of-service attacks with deadlock. It is true that clients can repeatedly make connections to a server until the server's resources are exhausted. However, even though no new clients can connect to the server, the server is still operational and can service requests from the connected clients.


  3. A colleague of yours thought up this idea for reducing the synchronization overhead in boss-worker server structures: fork the socket into each worker. All the workers wait on the same socket and one of them wins, leaving the others to continue competing; the winner rejoins the competition when it finishes its request.

    Describe how this idea would work with TCP. Describe how this idea would work with UDP. Make sure you clearly identify your assumptions.


    With TCP, the workers can compete for the listener socket or the active sockets returned by accept(). It is clear how the workers would get access to the listener socket: the boss forks the workers after opening and initializing the listener socket.

    It is less clear how the workers would compete for the active TCP sockets. The boss could use the fork trick again, but that would only work for the first active socket; once the workers are forked, their address spaces are separate from the boss's, and any further active sockets created by the boss would not be automatically available to the workers. In addition, because socket descriptors are per-process state, the boss can't simply hand its active sockets to the workers through ordinary inter-process communication.

    It seems likely that, if your colleague is implementing a TCP-based server, the boss will fork the listener socket to the workers and have the workers compete over the active sockets created by the accept() call. However, this only works if the operating system supports access to the same socket endpoint via concurrent accept() requests, as Linux does. If the operating system doesn't support concurrent accept() calls on the same endpoint, the boss will also have to fork some synchronization mechanism, such as semaphores, along with the listener socket to allow the workers to synchronize among themselves over access to the accept() calls. This requires more synchronization overhead than concurrent accept() calls do (essentially twice the number of system calls), but much less overhead than repeatedly forking workers.
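
    As a rough sketch of the TCP version, assuming the operating system allows concurrent accept() calls on a shared listener socket (as Linux does); the port number and worker-pool size are made up, and error handling is omitted:

        #include <netinet/in.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #define NWORKERS 4        /* assumed worker-pool size */
        #define PORT     5000     /* made-up port number */

        int main(void)
        {
            /* Boss: open and initialize the listener socket first. */
            int lfd = socket(AF_INET, SOCK_STREAM, 0);
            struct sockaddr_in addr;
            memset(&addr, 0, sizeof addr);
            addr.sin_family = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port = htons(PORT);
            bind(lfd, (struct sockaddr *)&addr, sizeof addr);
            listen(lfd, 16);

            /* Fork the workers; each inherits the listener socket. */
            for (int i = 0; i < NWORKERS; i++)
                if (fork() == 0)
                    for (;;) {
                        /* Workers compete here; one accept() wins. */
                        int cfd = accept(lfd, NULL, NULL);
                        if (cfd < 0)
                            continue;
                        /* ... read the request and send the reply ... */
                        close(cfd);
                    }

            /* The boss has nothing left to do but wait. */
            for (;;)
                pause();
        }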

    A UDP-based server only has one active socket, so the only thing the boss can do is fork the workers with the socket. The same assumptions that held for the TCP listener socket hold here, except that it's more likely that the operating system will support access to a socket endpoint via concurrent reads and writes. And if the operating system doesn't, the same trick of forking synchronization mechanisms along with the socket works here too.
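
    The UDP version is even simpler. Here is a minimal sketch of the worker's loop, again assuming the operating system allows concurrent recvfrom() calls on the shared socket; the buffer size and function name are made up:

        #include <sys/socket.h>
        #include <sys/types.h>

        #define MAX_REQUEST 1024    /* assumed maximum request size */

        /* Each forked worker runs this loop on the single UDP socket it
         * inherited from the boss; whichever worker's recvfrom() returns
         * gets the request, and its reply goes straight back to that
         * client.  Note the workers keep no per-client state. */
        void udp_worker(int sfd)
        {
            char buf[MAX_REQUEST];
            struct sockaddr_storage client;
            socklen_t clen;

            for (;;) {
                clen = sizeof client;
                ssize_t n = recvfrom(sfd, buf, sizeof buf, 0,
                                     (struct sockaddr *)&client, &clen);
                if (n < 0)
                    continue;
                /* ... compute the reply into buf, entirely from the
                 * request, since no state survives between requests ... */
                sendto(sfd, buf, n, 0, (struct sockaddr *)&client, clen);
            }
        }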

    Another assumption, required for UDP but not TCP, is that the server be stateless. Because UDP is connectionless, two successive requests from the same client could be handled by two different workers. Each worker is in a separate address space, so any state maintained by the worker handling the first request is unavailable to the worker handling the second request.

    Many people sorta answered this question, but not as clearly as they should have. In particular, many people wrote something about forking a TCP socket without indicating which socket - the listener or the active socket - was being forked. Also, some people answered this question under the assumption that workers were continually being forked in response to new requests, which runs against the intent of the question.


  4. Does an iterative, concurrent server use the concept of time-slicing to implement concurrency? Explain.


    An iterative, concurrent server multiplexes several open connections with a single thread. Although it depends on implementation details, an iterative, concurrent server would most likely not use time-slicing to implement concurrency.

    Time-slicing is a pre-emptive scheduling technique; a thread is yanked out of the CPU at the end of its quantum, independent of whether or not the thread is ready to leave. To implement time-slicing, an iterative, concurrent server would need, first, some way of interrupting itself after some time period and, second, a way to save and restore state at arbitrary times. Although it is possible to do both of these things, it is difficult.

    It is more likely that an iterative, concurrent server uses non-pre-emptive multitasking to provide concurrency. Each request would run until it reached a natural scheduling point, such as the start of a lengthy I/O operation or the point just after sending the reply.
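
    A minimal sketch of this style, assuming a select()-based event loop in a single thread; the connection limit and bookkeeping are made up, and the actual request handling is elided:

        #include <sys/select.h>
        #include <sys/socket.h>
        #include <unistd.h>

        #define MAX_CONN 64     /* assumed ceiling on open connections */

        /* One thread multiplexes every open connection.  A request runs
         * only when select() says its socket is readable, and it keeps
         * the CPU until it reaches a natural scheduling point (here, the
         * end of one request-reply exchange); nothing pre-empts it. */
        void serve(int listener)
        {
            int active[MAX_CONN];
            for (int i = 0; i < MAX_CONN; i++)
                active[i] = -1;                 /* -1 marks an unused slot */

            for (;;) {
                fd_set rd;
                FD_ZERO(&rd);
                FD_SET(listener, &rd);
                int maxfd = listener;
                for (int i = 0; i < MAX_CONN; i++)
                    if (active[i] >= 0) {
                        FD_SET(active[i], &rd);
                        if (active[i] > maxfd)
                            maxfd = active[i];
                    }

                select(maxfd + 1, &rd, NULL, NULL, NULL);

                if (FD_ISSET(listener, &rd)) {
                    int cfd = accept(listener, NULL, NULL), slot = -1;
                    for (int i = 0; i < MAX_CONN; i++)
                        if (active[i] < 0) { slot = i; break; }
                    if (cfd >= 0 && slot >= 0)
                        active[slot] = cfd;
                    else if (cfd >= 0)
                        close(cfd);             /* no room; drop it */
                }

                for (int i = 0; i < MAX_CONN; i++)
                    if (active[i] >= 0 && FD_ISSET(active[i], &rd)) {
                        /* ... read the request, compute, send the reply;
                         * close(active[i]) and reset the slot to -1 when
                         * the client is finished ... */
                    }
            }
        }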

    Most of the people answering this question ignored the iterative part and gave an answer in terms of fork concurrency. Also, many people described what the operating system does to implement concurrency, while this question asked about what the (single-threaded) server does to implement concurrency.



This page last modified on 13 March 2003.