CS 537, Client-Server Interfaces

Spring 2002 - Test 3


  1. Suppose that service s1 is offered over TCP and service s2 is offered over UDP, both in a client-server architecture. Is s1 or s2 the more likely candidate for a multi-protocol server implementation, or is there no difference between s1 and s2 with respect to a multi-protocol server implementation? Explain.


    If a service is offered over UDP, then it must use datagrams because that's what UDP provides. Implementing the same service over TCP requires some work to carry datagrams over a stream, but it's relatively simple and inexpensive work. Extending a UDP-based service to TCP in a multi-protocol server should not be that difficult.
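
    For example, one common way to carry datagram-style messages over a TCP stream is to prefix each message with its length. The sketch below is illustrative only; the function names are not part of any particular service, and error handling is kept minimal.

        /* Sketch: datagram-style messages over a TCP stream, framed with a
         * 4-byte length prefix in network byte order. Assumes a connected
         * stream socket. */
        #include <stdint.h>
        #include <unistd.h>
        #include <arpa/inet.h>

        /* Write exactly len bytes, retrying on short writes. */
        static int write_all(int fd, const void *buf, size_t len)
        {
            const char *p = buf;
            while (len > 0) {
                ssize_t n = write(fd, p, len);
                if (n <= 0)
                    return -1;
                p += n;
                len -= n;
            }
            return 0;
        }

        /* Send one "datagram" over the stream: length prefix, then payload. */
        int send_message(int fd, const void *msg, uint32_t len)
        {
            uint32_t netlen = htonl(len);
            if (write_all(fd, &netlen, sizeof netlen) < 0)
                return -1;
            return write_all(fd, msg, len);
        }

    The receiving side reads the 4-byte length first and then reads exactly that many bytes, which restores the message boundaries that TCP's byte stream does not preserve.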

    If a service is offered over TCP, then it may use streams or it may use datagrams; in any event, it's a reliable service. Implementing a TCP-based service over UDP could involve a lot of hard work to provide a comparable level of reliability over UDP, even if the service uses datagrams; a stream-based service would require even more work.
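
    To give a feel for that work, even the simplest reliability scheme, stop-and-wait with retransmission, means adding sequence numbers, acknowledgments, and timers on top of UDP. The sketch below assumes a connected UDP socket; the header layout and retry policy are made up for illustration and are not any standard protocol.

        /* Sketch: stop-and-wait retransmission over a connected UDP socket.
         * The header layout and retry policy are illustrative assumptions. */
        #include <stdint.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <sys/select.h>
        #include <arpa/inet.h>

        struct hdr { uint32_t seq; };            /* assumed per-message header */

        int reliable_send(int fd, uint32_t seq, const void *msg, size_t len)
        {
            char pkt[1500];
            struct hdr h = { htonl(seq) };

            if (len > sizeof pkt - sizeof h)
                return -1;
            memcpy(pkt, &h, sizeof h);
            memcpy(pkt + sizeof h, msg, len);

            for (int tries = 0; tries < 5; tries++) {
                send(fd, pkt, sizeof h + len, 0);

                fd_set rd;
                struct timeval tv = { 1, 0 };    /* 1-second retransmit timer */
                FD_ZERO(&rd);
                FD_SET(fd, &rd);
                if (select(fd + 1, &rd, NULL, NULL, &tv) > 0) {
                    struct hdr ack;
                    if (recv(fd, &ack, sizeof ack, 0) == (ssize_t)sizeof ack &&
                        ntohl(ack.seq) == seq)
                        return 0;                /* acknowledged */
                }
                /* timeout or stray packet: retransmit */
            }
            return -1;
        }

    And this still ignores flow control, congestion control, reordering, and connection setup and teardown, all of which TCP provides for free.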

    To take an example, imagine implementing a datagram-based service such as NFS over TCP. Although there are difficulties involved, most are not a result of the change from UDP to TCP (assuming we ignore performance issues); in fact, the change would probably result in several simplifications. Now imagine implementing a stream-based service such as FTP over UDP. This is essentially re-implementing TCP, with UDP in the role of IP.

    A lot of people focused on the wrong part of implementing a service in a server. It's true that you have to add another listener socket to support both UDP and TCP, and that adding a new listener isn't a big deal. From that point of view there isn't much difference between s1 and s2. But, once you've added the second listener, you also have to add something that deals with the requests that come in over the listener, and that's where the difference between s1 and s2 starts to show.
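
    The listener part really is the easy part. The sketch below uses select() to watch a TCP listening socket and a UDP socket at once; the handle_* routines are hypothetical stand-ins for the service-specific code, which is where the difference between s1 and s2 actually shows up.

        /* Sketch: the "second listener" part of a multi-protocol server.
         * tcp_fd is a listening TCP socket and udp_fd is a bound UDP socket. */
        #include <sys/select.h>
        #include <sys/socket.h>

        void handle_tcp_connection(int conn_fd);   /* service-specific code */
        void handle_udp_request(int udp_fd);       /* service-specific code */

        void serve(int tcp_fd, int udp_fd)
        {
            for (;;) {
                fd_set rd;
                FD_ZERO(&rd);
                FD_SET(tcp_fd, &rd);
                FD_SET(udp_fd, &rd);

                int maxfd = tcp_fd > udp_fd ? tcp_fd : udp_fd;
                if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
                    continue;

                if (FD_ISSET(tcp_fd, &rd)) {
                    int conn = accept(tcp_fd, NULL, NULL);
                    if (conn >= 0)
                        handle_tcp_connection(conn);
                }
                if (FD_ISSET(udp_fd, &rd))
                    handle_udp_request(udp_fd);
            }
        }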


  2. Explain how fork() and some form of exec() can improve the structure of multi-service servers.


    The main structural problem with multi-service servers is the code bloat that results from loading the server with all the code it needs to implement the many services it offers. Even if the potential implementation problems are dealt with via careful structuring and design, the resulting executable is still huge, leading to performance problems.

    fork() and exec() allow each service to be split off from the server and implemented in its own executable. The server handles each service request by forking a copy of itself and then calling exec() to replace the child with a copy of the proper service executable. This greatly simplifies the design and implementation of the server and reduces performance problems (although fork() and exec() could exact a severe performance cost of their own).
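
    A minimal sketch of that dispatch is below, assuming each service lives in its own executable and takes the client connection on its standard input and output; a real server would also choose the executable by service, pass along addressing information, and reap its children.

        /* Sketch: handing a request to a per-service executable with
         * fork() and exec(). The path and descriptor conventions are
         * assumptions, not a standard. */
        #include <sys/types.h>
        #include <unistd.h>
        #include <stdio.h>

        void dispatch(const char *service_path, int client_fd)
        {
            pid_t pid = fork();
            if (pid == 0) {
                /* Child: make the client socket the service's stdin and
                 * stdout, then replace this process with the service. */
                dup2(client_fd, 0);
                dup2(client_fd, 1);
                execl(service_path, service_path, (char *)NULL);
                perror("execl");        /* reached only if exec fails */
                _exit(1);
            } else if (pid > 0) {
                close(client_fd);       /* parent no longer needs it */
            }
        }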

    Most people agreed that fork() and exec() could help a multi-service server, but some people were a little fuzzy on how exactly they help.


  3. Briefly describe the steps that would be required when using application-level tunneling to get an application using UDP working on a system that doesn't support the Internet Protocol.


    The first, and main, step is to create a library that tunnels the UDP packets through the host network subsystem. The library would present a UDP interface to the application and drive the host network subsystem in a way that supports UDP semantics. Putting the library on both the local and remote hosts would allow the application to run on the IP-less hosts. The library can live in user space or in the kernel.

    The second step would be to implement the UDP/IP features not supported by the host network. If the host network is a transport network, the second step would involve mostly address resolution between IP and the local network's addressing scheme. If the host network isn't a transport network, then considerably more work is involved, most of it dealing with routing.
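
    A sketch of both steps is below. hostnet_send() and hostnet_addr_for() are hypothetical names for whatever primitives the non-IP host network actually provides; the library wraps each datagram in a small header carrying the IP-style addressing the host network doesn't understand, and resolves the IP destination to a host-network address before sending.

        /* Sketch: a tunneling library presenting a UDP-style interface on a
         * non-IP host network. hostnet_send() and hostnet_addr_for() are
         * hypothetical host-network primitives. */
        #include <stdint.h>
        #include <string.h>

        struct tunnel_hdr {                /* assumed encapsulation header */
            uint32_t dst_ip;               /* IP-style addressing carried  */
            uint16_t dst_port;             /* inside the host network's    */
            uint16_t src_port;             /* own frames                   */
        };

        int hostnet_send(int hostaddr, const void *frame, size_t len);
        int hostnet_addr_for(uint32_t dst_ip);   /* IP -> host-net address */

        /* UDP-style sendto() offered to the application by the library. */
        int tunnel_sendto(const void *msg, size_t len, uint32_t dst_ip,
                          uint16_t dst_port, uint16_t src_port)
        {
            char frame[1500];
            struct tunnel_hdr h = { dst_ip, dst_port, src_port };

            if (len > sizeof frame - sizeof h)
                return -1;
            memcpy(frame, &h, sizeof h);
            memcpy(frame + sizeof h, msg, len);

            int hostaddr = hostnet_addr_for(dst_ip);  /* step two: resolution */
            if (hostaddr < 0)
                return -1;
            return hostnet_send(hostaddr, frame, sizeof h + len);
        }

    On a transport network this is most of the job; on a non-transport network the library would also have to decide which host-network hop gets each frame, which is the routing work mentioned above.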

    A few people assumed, for some reason, that the host system supported UDP, despite what the problem said. A few more did the same thing more indirectly by assuming that SMTP, an IP-based protocol, was available.


  4. What is the second-hop problem? Explain how libraries can avoid it.


    The second-hop problem refers to the doubled bandwidth use that arises when application-level gateways are used to translate between two different services on a network. Each request appears twice on the network, once going from the client to the gateway for translation, and the second time going from the gateway to the server (the reply causes a similar problem).

    Libraries can solve the second-hop problem by incorporating the gateway's translation code into the client itself. A gateway library translates the request in the client before it reaches the network, and reverses the transformation on the reply as it comes off the network.
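
    A sketch of such a library is below; translate_request() and translate_reply() are hypothetical stand-ins for whatever protocol conversion the gateway used to perform. Because the conversion now happens inside the client, each request and each reply crosses the network only once.

        /* Sketch: a gateway library folded into the client. The translate_*
         * routines are hypothetical protocol-conversion functions. */
        #include <stddef.h>
        #include <sys/types.h>
        #include <sys/socket.h>

        size_t translate_request(const void *in, size_t inlen,
                                 void *out, size_t outmax);
        size_t translate_reply(const void *in, size_t inlen,
                               void *out, size_t outmax);

        /* Translate a request, then send it straight to the server. */
        ssize_t gw_send(int fd, const void *req, size_t len)
        {
            char buf[4096];
            size_t n = translate_request(req, len, buf, sizeof buf);
            return send(fd, buf, n, 0);
        }

        /* Receive the server's reply and reverse the translation. */
        ssize_t gw_recv(int fd, void *reply, size_t max)
        {
            char buf[4096];
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n <= 0)
                return n;
            return (ssize_t)translate_reply(buf, (size_t)n, reply, max);
        }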

    A gateway library works best when it's decentralized; that is, when the gateway doesn't require global coordination. A firewall is an example of a gateway requiring global coordination in the form of pass-reject rule tables and various types of connection accounting.

    A number of people treated the application gateway as a transport gateway by positioning it between two networks. But if the gateway sits between two networks, there is no second-hop problem, because the packet appears on each network only once on its way from the client to the server and back.



This page last modified on 3 May 2002.