CS 535, Telecommunications

Quiz 3, 20 October 2004


  1. Is the SDLC error detection-correction property consistent with or contrary to the end-to-end property? Explain your answer.


    Consistent: The SDLC protocol is designed to work over a range of links, including those with high delay and high error rates, such as satellite links. Because end-to-end recovery over such links can be expensive, adding hop-by-hop error detection or correction can be made consistent with the end-to-end principle, given a suitable economic comparison of end-to-end vs. hop-by-hop costs.

    Contrary: The SDLC block code is computed over the whole of the preceding part of the frame (excluding the flag), which includes the payload (see the sketch below). This applies error detection or correction to a portion of the frame that may not need it (voice or video data, for example) or that may already provide its own error detection or correction. This unhelpful duplication of effort is exactly what the end-to-end principle argues against.

    Irrelevant: Actually, this is a trick question. The end-to-end principle applies when there are intermediate routing or switching nodes in the network; it helps determine what functions those intermediate nodes should perform. SDLC is a link-level protocol and runs station-to-station over a single link; there are no intermediate nodes.

    One answer didn't fall for the trick; a few answers appeared to be aware of end-points vs. intermediate nodes, but didn't draw the distinction clearly. Most of the other answers were evenly divided between consistent and contrary, although no answers noted that the issue arises because the block code covers the payload; if it didn't, SDLC would be consistent with the end-to-end principle (subject to the economic arguments made in the consistent answer above).
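
    To make the coverage point concrete, here is a minimal Python sketch of a CRC-16-CCITT check, the kind of cyclic code behind SDLC's frame check; the exact FCS rules (bit ordering, initialization, final complementing) are defined by the standard and glossed over here, and the address, control, and payload values are made up for illustration. The point is simply that the check runs over the payload as well as the header fields.

        def crc16_ccitt(data, crc=0xFFFF):
            # Bitwise CRC-16-CCITT, polynomial x^16 + x^12 + x^5 + 1 (0x1021).
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    if crc & 0x8000:
                        crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                    else:
                        crc = (crc << 1) & 0xFFFF
            return crc

        # The check covers the address, control, and information (payload)
        # fields, but not the flags; the field values here are hypothetical.
        frame_body = bytes([0xC1, 0x13]) + b"application payload"
        fcs = crc16_ccitt(frame_body)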


  2. Explain why a good round-trip-time estimate isn't enough to help with congestion control, and how adding a variance estimate can deal with congestion better.


    Congestion is an exponential function of network load, which means that small changes in network load can produce exponentially large changes in congestion levels. A running average of round-trip times (rtt) responds essentially linearly to changes in the rtt, which is too little, too late. Adding a variance estimate tracks how much the rtt is changing, which better reflects the presence of congestion, since congestion shows up as arbitrarily large swings in the rtt.
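
    As a concrete illustration, here is a minimal Python sketch of the mean-plus-deviation rtt estimator Jacobson described for TCP retransmission timers; the gains (1/8 and 1/4) and the deviation multiplier (4) are the conventional TCP values, assumed here for illustration rather than taken from the quiz.

        class RttEstimator:
            def __init__(self, first_sample):
                self.srtt = first_sample         # smoothed rtt (running average)
                self.rttvar = first_sample / 2   # smoothed mean deviation of the rtt

            def update(self, sample):
                # Fold in a new rtt measurement and return a retransmit timeout.
                err = sample - self.srtt
                self.srtt += err / 8                          # slow, linear response to rtt changes
                self.rttvar += (abs(err) - self.rttvar) / 4   # tracks how much the rtt is swinging
                # Large swings in the rtt, the early signature of congestion,
                # push the timeout out much faster than the average alone would.
                return self.srtt + 4 * self.rttvar

    Feeding each new rtt sample through update() yields a timeout that reacts to the variability of the rtt, not just to its average.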

    Many answers asserted that a good rtt was not enough, but then forgot to justify that assertion by noting the linear vs. exponential argument. Those same answers also asserted that adding variance to the calculation would be helpful, but didn't identify the mechanism by which it helps.

    Many answers tried to base an argument on the size of the rtt, whether it was large or small. This isn't a particularly successful strategy, because the size of the rtt is irrelevant; it's the accuracy that's important.



This page last modified on 14 November 2004.