CS 537, Client-Server Interfaces

Spring 2003 - Test 3


  1. I recently received the following spam:

    Monitor the Web from Your E-Mail Box
    
    TracerPage monitors a list of Web pages that you submit and will mail you
    copies of the pages whenever they change.  Monitor the Web from your e-mail box
    with TracerPage: http://www.tracerpage.com/tp.html
    

    Explain how both application-level gateways and tunneling might be used to implement this service's advertised features.

    Note: when answering this question, do not use Comer and Stevens's definition of tunneling; use instead the definition given in class and in the lecture notes.

    Extra credit (+5 points maximum, 100 total points maximum): Answer this question using Comer and Stevens's definitions of encapsulation and tunneling. Clearly mark the extra-credit part of your answer, should you choose to answer.


    Application gateways are servers that span two or more different networks or, more generally, transport mechanisms. In this case, the two transport mechanisms are HTTP (web pages) and SMTP (e-mail). The service uses HTTP to read pages from Web servers and sends them to its clients as e-mail via SMTP.
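
    The gateway's two halves can be sketched in a few lines of Python. This is a hypothetical sketch, not the service's actual implementation; the addresses and the assumed mail relay on localhost are invented for illustration:

        # Fetch a page over HTTP, then forward it over SMTP.
        import smtplib
        import urllib.request
        from email.message import EmailMessage

        def mail_page(page_url, recipient):
            # HTTP side of the gateway: read the page from its Web server.
            with urllib.request.urlopen(page_url) as response:
                body = response.read().decode("utf-8", errors="replace")

            # SMTP side of the gateway: forward the page as e-mail.
            msg = EmailMessage()
            msg["Subject"] = "Page changed: " + page_url
            msg["From"] = "tracer@example.com"        # invented address
            msg["To"] = recipient
            msg.set_content(body)
            with smtplib.SMTP("localhost") as relay:  # assumed local relay
                relay.send_message(msg)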

    Tunneling is a technique for implementing encapsulation, and involves including packets from one network as payloads in packets from another network. The TracerPage service uses tunneling when it puts Web pages in the body of e-mail messages.
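
    The mechanism can be illustrated abstractly: a complete inner packet rides as the opaque payload of an outer packet and is recovered intact at the far end. The framing below (a two-byte length prefix) is an invented convention for the sketch, not any real protocol:

        import struct

        def encapsulate(inner_packet: bytes) -> bytes:
            # The inner packet rides, untouched, as the outer payload.
            return struct.pack("!H", len(inner_packet)) + inner_packet

        def decapsulate(outer_payload: bytes) -> bytes:
            # The far end of the tunnel recovers the inner packet.
            (length,) = struct.unpack("!H", outer_payload[:2])
            return outer_payload[2 : 2 + length]

        inner = b"GET /tp.html HTTP/1.0\r\n\r\n"  # a packet from one network
        outer = encapsulate(inner)                 # carried by another
        assert decapsulate(outer) == inner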

    Extra credit answer: According to Comer and Stevens, encapsulation places a datagram inside a hardware frame, while tunneling carries a datagram as the payload of another, higher-level datagram.

    I thought this would be an easy question, but many people did less well than they should have. Most people got close to a correct answer for tunneling, but the gateway answers were all over the place. It may be that the e-mail-ftp gateway example in Comer and Stevens confused some people, but most gateway answers wandered off into libraries and operating systems, which are important, but not essential to answering this question.


  2. Explain the usefulness of preallocation in implementing multi-protocol servers. Explain the usefulness of preallocation in implementing multi-service servers. Explain which of the two server types (multi-protocol or multi-service) benefits more from preallocation.


    Preallocation is a technique for speeding up concurrency: the server creates extra computations (threads or processes) ahead of time and keeps them idle until they're needed, avoiding the concurrency start-up cost that would otherwise be paid per request.
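
    As a concrete (if hypothetical) Python sketch of the idea: a pool of worker threads is created before any request arrives and blocks idle on a queue. The pool size and the handle() stub are invented for illustration:

        import queue
        import threading

        def handle(request):
            # Stand-in for the server's real request-handling code.
            print("handled", request)

        requests = queue.Queue()

        def worker():
            while True:
                handle(requests.get())  # sit idle until a request arrives

        # Preallocate the computations before any requests arrive.
        for _ in range(4):
            threading.Thread(target=worker, daemon=True).start()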

    A multi-protocol server makes one service available via two or more protocols. The multiplicity occurs at the server interface, where there are a number of different protocols feeding one server.

    Preallocation in a concurrent, multi-protocol server would most likely take one of two forms. First, if the server multiplexes a single computation across the protocol interfaces, it could instead preallocate one computation per protocol interface. Similarly, if the server relies on multiplexing the back-end code over the received requests, it could create a computation per protocol to handle requests from that protocol only.
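
    The first form might look like the following Python sketch, where the two "protocols" are simply two TCP ports (the port numbers and the trivial service are invented): one computation is preallocated per protocol interface, and both interfaces feed the same service.

        import socket
        import threading

        def service(data):
            # The single service offered behind both interfaces.
            return data.upper()

        def interface(port):
            listener = socket.socket()
            listener.bind(("", port))
            listener.listen()
            while True:
                conn, _ = listener.accept()
                with conn:
                    conn.sendall(service(conn.recv(4096)))

        # One preallocated computation per protocol interface.
        for port in (5001, 5002):
            threading.Thread(target=interface, args=(port,), daemon=True).start()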

    A multi-service server makes two or more services available via a single protocol. The multiplicity occurs at the back end, where code implements the offered services.

    Preallocation in a concurrent, multi-service server would most likely involve preallocating computations per service, each handling requests for that service only.
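
    A sketch of per-service preallocation, with two invented services ("echo" and "time"): one computation idles per service, and the front end dispatches each request to its service's queue.

        import queue
        import threading
        import time

        services = {
            "echo": lambda request: request,
            "time": lambda request: time.ctime(),
        }
        queues = {name: queue.Queue() for name in services}

        def service_worker(name):
            handler = services[name]
            while True:
                print(name, "->", handler(queues[name].get()))

        # Preallocate one computation per offered service.
        for name in services:
            threading.Thread(target=service_worker, args=(name,), daemon=True).start()

        # The front end parses out the service name and dispatches.
        queues["echo"].put("hello")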

    Of the two server types, multi-service servers stand to gain more from preallocation: a server typically supports only a few protocols but may offer many services, so there are more back-end computations to preallocate and more start-up cost to save.

    Several answers to this question didn't even mention preallocation, and others mentioned preallocation without indicating what was being preallocated. Many answers mentioned preallocating sockets or port numbers, which is only somewhat correct: the preallocation of interest is an optional optimization, while allocating sockets is not optional.


  3. One way of distinguishing between latency and delay in the network is by noting that latency is constant but delay is changeable. For example, by using two connections, delay can be cut in half but latency remains unchanged.

    Explain how you can make the opposite argument at a server. That is, explain how, at a server, you can consider delay to be constant and latency to be changeable. Don't forget to explain what latency and delay are at the server.


    Latency at the server is the amount of time it takes for a request to make it into the server; delay is the amount of time it takes for the reply to make it out of the server. Latency is a function of the arrival and service rates; that is, of the queue length. Delay is a function of the operations performed by the server in handling the request.

    The easiest way to reduce latency at a server is to increase concurrency, allowing several requests to be in progress at the same time; this effectively increases the rate at which the requests leave the queue and enter the server, but does nothing about the amount of time the requests spend in the server.
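
    A back-of-the-envelope calculation shows the effect. Suppose eight requests are queued at once and each needs one second of fixed service; the numbers are invented, but the pattern isn't: adding workers shrinks the average queue wait (latency) while the per-request service time (delay) never moves.

        SERVICE_TIME = 1.0  # delay: fixed by the work the server does
        REQUESTS = 8

        for workers in (1, 2, 4):
            # Request i waits for i // workers earlier rounds of service.
            waits = [(i // workers) * SERVICE_TIME for i in range(REQUESTS)]
            print(workers, "worker(s): average latency",
                  sum(waits) / REQUESTS, "s; delay", SERVICE_TIME, "s")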

    Reducing delay is harder, and essentially requires revamping the server so it handles requests faster (analogous to reducing network latency by using a faster network).

    Despite the question's repeated mention of latency and delay at the server, many answers involved the server and either the network or a client (or both). Most answers got across the idea that concurrency decreases one thing but not the other, but they were less clear on why the other thing didn't change, and they were lax about defining what the two things were.


  4. The Network File System (NFS) uses the Mount Protocol to help it implement NFS services. Pick any one of the three peer-to-peer systems discussed in class and explain how that peer-to-peer system implements the services that the Mount Protocol provides for NFS.


    The Mount Protocol lets a host build a local file system by patching together pieces of network-resident file systems. A peer-to-peer system can be thought of as a global, distributed file system. Mounting, in the NFS sense, most closely corresponds to a new peer joining the network of existing peers and adding its resources to the global resource collection.

    Note that peer-to-peer mounting is backwards from NFS mounting: an NFS mount takes global resources (file systems in the network) and makes them local (or at least locally accessible), while a peer-to-peer mount takes local resources and makes them global.
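
    That direction can be made concrete with a toy global index (everything here is invented for illustration): joining publishes a peer's local resources into the collection every peer can see.

        global_index = {}  # resource name -> owning peer

        def join(peer_name, local_resources):
            # Joining "mounts" the peer's local resources globally.
            for resource in local_resources:
                global_index[resource] = peer_name

        join("peer-a", ["/music/song.mp3"])
        join("peer-b", ["/docs/notes.txt"])
        print(global_index)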

    A number of answers assumed the question asked how to use the Mount Protocol with the selected peer-to-peer system, rather than describing how the system implements mount services. A few answers just ignored the question entirely and described how the Mount Protocol works.



This page last modified on 10 May 2003.