The problem with your colleague's observation is that the tunneled protocol (encrypted TCP) provides a reliable, byte-stream abstraction, so it isn't necessary for the tunneling protocol (clear-text TCP) to provide it too. If such redundancy can be provided without much cost, then it isn't a problem. However, if the overheads are high, as they are here, then the costs can quickly escalate to unreasonable and impractical levels as the two TCPs fight against each other and the network to recover from lost or excessively delayed packets.
It is better to tunnel encrypted TCP through clear-text UDP. UDP does exactly what is wanted - transport through the public network - cheaply and simply; encrypted TCP takes care of the rest. Also, UDP is just IP multiplexed with port numbers and so represents a fairly simple and natural approach to tunneling TCP.
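The "UDP is just IP multiplexed with port numbers" point can be seen in how little code a UDP endpoint needs. Below is a minimal sketch, not from the original answer: a loopback round trip where a datagram is sent, echoed, and received with no connection setup or stream state (the addresses and payload are illustrative).

```python
# Minimal sketch of UDP's simplicity: one socket per endpoint, no handshake,
# datagrams addressed directly by (host, port). Loopback only, for illustration.
import socket

def udp_round_trip(payload: bytes) -> bytes:
    """Send a datagram to a loopback 'tunnel endpoint' and get it echoed back."""
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 0))          # kernel picks a free port
    addr = server.getsockname()

    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(payload, addr)           # no connection setup, no stream state

    data, peer = server.recvfrom(4096)     # the datagram arrives whole
    server.sendto(data, peer)              # echo it back to the sender

    reply, _ = client.recvfrom(4096)
    client.close()
    server.close()
    return reply

print(udp_round_trip(b"encrypted TCP segment"))
```

A tunnel built this way just treats each encrypted TCP segment as an opaque datagram payload; the tunneled TCP, not the tunnel, handles loss and ordering.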
This one surprised me - no one got it right. I thought the double reliability would have been a clear tip-off, along with the UDP-IP similarity. A couple of people got close by correctly laying out all the facts and then drawing exactly the wrong conclusion.
In a multi-protocol server, each service provided requires at least two listener sockets. For k sockets total, a multi-protocol server would be able to provide at most floor(k/2) services. In a multi-service server, each service requires one listener socket, allowing for at most k services total. Because floor(k/2) < k for every k ≥ 1, if your objective is to implement as many different services as possible, you should choose to implement a multi-service server.
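The counting argument above can be made concrete with a tiny helper (the function name and sample values are mine, not from the question):

```python
# Hypothetical helper illustrating the listener-socket arithmetic:
# a multi-protocol server needs two listener sockets per service
# (e.g., one TCP and one UDP), a multi-service server needs one.
def max_services(k_sockets: int, multi_protocol: bool) -> int:
    """Upper bound on services offered with k listener sockets."""
    return k_sockets // 2 if multi_protocol else k_sockets

for k in (1, 2, 5, 8):
    print(k, max_services(k, multi_protocol=True),
             max_services(k, multi_protocol=False))
```

For any socket budget k ≥ 1 the multi-service column dominates, which is the whole argument.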
Almost everybody got this more or less right. A couple of people argued for (or seemed to argue for) a tcpmux implementation, which wasn't one of the choices.
shutdown() along with close() for terminating a TCP connection between a client and server. You don't need to consider shutdown()'s arcane resource-management issues at the hosts.
shutdown() ends a TCP (or, less importantly, UDP) connection according to the protocol specifications, which provides an orderly shutdown of the connection. close() just cuts off the connection without undue regard as to what the connection is.
Orderly shutdowns allow information already in transit through the connection to continue to the destination; an endpoint, even after shutting down a connection, may continue to receive information. A connection that's been closed is unavailable for any further communications of any kind. Shutting down a connection also makes it clear to the other endpoint that the termination is being done deliberately and not as the result of a crash, for example. When an endpoint is just closed, it's difficult for the other endpoint to determine if the termination was deliberate or an accident. Finally, shutdown() recognizes that connections are duplex by allowing individual control over each direction; close() deals with the connection as an indivisible unit.
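The duplex point lends itself to a short demonstration. This is a loopback sketch of my own, not part of the original answer: the client half-closes its sending direction with shutdown(SHUT_WR), the server sees end-of-file yet can still send a reply, and the client can still receive it; a plain close() would have torn down both directions at once.

```python
# Sketch of a TCP half-close over loopback: shutdown(SHUT_WR) closes only
# the client's sending direction, leaving the receiving direction usable.
import socket
import threading

def half_close_demo() -> bytes:
    """Client half-closes; server replies after seeing the client's EOF."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))         # kernel picks a free port
    server.listen(1)
    addr = server.getsockname()

    def serve() -> None:
        conn, _ = server.accept()
        while conn.recv(4096):            # drain until the client's EOF
            pass
        conn.sendall(b"reply after EOF")  # server's direction is still open
        conn.close()

    t = threading.Thread(target=serve)
    t.start()

    client = socket.create_connection(addr)
    client.sendall(b"request")
    client.shutdown(socket.SHUT_WR)       # done sending; receiving still works

    chunks = []
    while True:                           # read the reply until server's EOF
        data = client.recv(4096)
        if not data:
            break
        chunks.append(data)

    client.close()
    t.join()
    server.close()
    return b"".join(chunks)

print(half_close_demo())
```

Had the client called close() instead of shutdown(), the server's reply would have had nowhere to go.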
Most people got this one too, although many with less clarity than I wanted to see. Some people wandered into the forbidden territory by pointing out that close() doesn't necessarily mean close, which is true but irrelevant to the endpoint issuing the close.
The three forms of concurrency are fork concurrency, thread concurrency, and multiplex (single-thread) concurrency. Unless there is a special reason not to do so, thread and multiplex concurrency will be lumped together and considered as thread concurrency.
The obvious first. The server makes use of fork concurrency to run the CGI programs. The benefit to doing this is standards compliance; the idea behind CGI programs is to separate host-specific details into programs independent of the HTTP server. Without fork concurrency, it would be necessary to incorporate those programs into the server, increasing complexity and decreasing portability. The drawbacks are the usual ones with fork concurrency: high overhead and limited inter-process communication.
There are a couple of advantages to using thread concurrency in the server to manage CGI programs. First, it decouples the server from the CGI program, letting the server handle other requests while the CGI program is working; this can be valuable when the CGI program takes a long time to execute. Second, the other requests may include other CGI program executions, which increases server responsiveness via parallelism (it also introduces the possibility of race conditions when more than one instance of the same CGI program is running at the same time). The disadvantage is increased complexity in the server; however, the complexity is more of a problem for multiplex concurrency than it is for thread concurrency, because multiple threads multiplex implicitly, while a single thread, and so the server, must multiplex explicitly.
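The thread-per-CGI idea can be sketched briefly. This is an illustrative stand-in of my own: run_cgi below spawns a subprocess (standing in for a forked CGI program) and a small thread pool runs several of them concurrently, the way a threaded server could keep serving other requests while each one runs.

```python
# Hedged sketch of combining thread and fork concurrency: each "CGI program"
# is a subprocess, and worker threads let several run at the same time.
# run_cgi and the echoed arguments are stand-ins, not from the original text.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_cgi(arg: str) -> str:
    """Stand-in for executing a CGI program; here just a child that echoes."""
    out = subprocess.run(
        [sys.executable, "-c", f"print({arg!r})"],  # trivial child process
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Three "requests" handled concurrently by a small pool of worker threads.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_cgi, ["a", "b", "c"]))

print(results)
```

The pool preserves result order even though the child processes may finish in any order, which sidesteps one of the race-condition pitfalls mentioned above.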
The advantages and drawbacks of concurrency in the CGI program depend heavily on the nature of the CGI program using concurrency. In general, however, CGI programs would use concurrency - in any form - to simplify design and improve performance at the cost of increased implementation effort and decreased robustness (due to potential synchronization and timing errors).
People who drew on the motivation behind CGI did better on this question than people who didn't, but otherwise this question offered up little difficulty to most people.
This page last modified on 3 May 2001.