Computer Networking • CS 514

Test 2, 19 February 2014


This test has six questions. The test runs the entire period, ending at 6:20. When you finish the test, you may leave.

Always justify answers unless the question explicitly states it’s not necessary. Some questions require reasonable assumptions in order to be answered. In such cases the answer should clearly state the assumptions made.
  1. Briefly (one or two sentences, 25 to 30 words) define:

    • Congestion control

    • Well-known port numbers

    • Connection-oriented

    • Maximum segment size

    • Receive window


    • Congestion control:
    A technique that helps a protocol react to congestion in the network, usually by throttling the transmission rate.

    • Well-known port numbers:
    Port numbers managed by a central authority (the Internet Assigned Numbers Authority, IANA). In theory, only the applications to which they are assigned should use well-known port numbers.

    • Connection-oriented:
    A protocol is connection-oriented if it provides a service that behaves like a connection, even though the underlying network sets up no connection and reserves no resources to implement that behavior.

    • Maximum segment size:
    A connection’s maximum segment size (MSS) is the size of the largest data chunk that can pass through the underlying networks without being rejected (too big) or changed (fragmented).

    • Receive window:
    A value, advertised by the receiver to the TCP sender, indicating the space available in the receiver’s buffer. The receiver sets the receive-window size, although the sender may never have that many unacknowledged segments outstanding if the congestion window is smaller than the receive window.
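
    To tie the receive-window and congestion-control definitions together, here is a minimal Python sketch (the function name usable_window and all numbers are invented for illustration) of the rule that a TCP sender’s unacknowledged data is bounded by the smaller of the congestion window and the receive window:

        def usable_window(rwnd_bytes, cwnd_bytes, bytes_in_flight):
            """How many more bytes the sender may put on the network right now."""
            limit = min(rwnd_bytes, cwnd_bytes)      # effective window
            return max(0, limit - bytes_in_flight)   # room left under that limit

        # Receiver advertises 64 kbytes, but the congestion window is only
        # 16 kbytes, so the congestion window is the binding constraint.
        print(usable_window(64 * 1024, 16 * 1024, 8 * 1024))   # prints 8192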

  2. Given a protocol using selective repeat, explain why the receiver may have to acknowledge PDUs outside its receive window.


    Selective repeat may require the receiver to acknowledge PDUs received outside (below) its window to remove ambiguity about which PDUs have been received. If acks get lost, the receiver’s window may advance past the associated PDU while the sender’s window does not. When the PDU is retransmitted and received, the receiver should re-ack it even though it has already been received, so that the sender’s window can advance and, when the receiver’s window again expects a PDU with the same sequence number, the sender sends a new PDU rather than a retransmission. See Kurose and Ross, page 237 (5th ed.) for details.
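
    A rough Python sketch of the receiver-side rule (the class and method names are invented, and real selective-repeat implementations differ in detail): PDUs up to one window’s worth below the receive window are acknowledged again rather than ignored.

        WINDOW = 4

        class SRReceiver:
            def __init__(self):
                self.rcv_base = 0     # lowest sequence number not yet delivered
                self.buffered = {}    # out-of-order PDUs awaiting delivery

            def on_pdu(self, seq, data):
                if self.rcv_base <= seq < self.rcv_base + WINDOW:
                    self.buffered[seq] = data
                    # Deliver in-order data and slide the window forward.
                    while self.rcv_base in self.buffered:
                        del self.buffered[self.rcv_base]
                        self.rcv_base += 1
                    return "ack %d" % seq
                if self.rcv_base - WINDOW <= seq < self.rcv_base:
                    # Already delivered, but the earlier ack may have been
                    # lost: re-ack so the sender's window can advance.
                    return "ack %d" % seq
                return None           # outside both ranges: ignore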

  3. A and B have established a TCP connection and A is sending a steady stream of data to B (“steady” — A's observed bandwidth on the TCP connection is relatively constant over time. “relatively constant” — small variance).

    Suddenly, the network bandwidth available to A and B drops by 90% (for example, drops from 100 kbyte/sec to 10 kbyte/sec). Describe the behavior of A's end of the TCP connection after the bandwidth drops.


    As the bandwidth drops, data and ack PDUs get dropped, triggering congestion control at the sender. The congestion window shrinks back to one segment, the slow-start threshold is halved, and slow start begins again. Because of the drastic bandwidth loss, the halved threshold is probably still too high, so more PDUs get dropped, triggering further rounds of congestion control. Eventually the threshold is reduced enough that PDUs are lost during the linear (congestion-avoidance) increase rather than the exponential (slow-start) increase.
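
    A toy Python model of this behavior (Tahoe-style, with invented numbers; the window changes once per round-trip time, and a loss is assumed whenever the window exceeds the new capacity):

        def next_cwnd(cwnd, ssthresh, loss):
            if loss:                            # loss detected (timeout)
                return 1, max(cwnd // 2, 2)     # window back to 1, threshold halved
            if cwnd < ssthresh:                 # slow start
                return cwnd * 2, ssthresh
            return cwnd + 1, ssthresh           # congestion avoidance (linear)

        # Steady state at cwnd = 40, then capacity drops so that any window
        # above 4 causes a loss; the threshold keeps halving until growth is
        # linear below the new capacity.
        cwnd, ssthresh, capacity = 40, 64, 4
        for rtt in range(12):
            loss = cwnd > capacity
            cwnd, ssthresh = next_cwnd(cwnd, ssthresh, loss)
            print(rtt, cwnd, ssthresh)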

  4. Which is a better multiplexer-demultiplexer: a taxi cab or the subway?


    Subways are better multiplexers. If several people want to go uptown, only the one who manages to flag a cab will be able to do so; the other people will have to flag their own cabs. If a cab is out of service (has already been flagged), none of the people will be able to flag it.

    On the other hand, if the people want to go uptown and wait for the subway, once it comes they all will be able to go at the same time (assuming there’s room, which has nothing to do with multiplexing).

    Note that you could make the same argument about taxis if you allow taxi sharing.
  5. A is sending 100 kbytes of data to B over a TCP connection; each TCP PDU contains 1 kbyte of data. If there is a 10% probability of dropping a PDU in transit, how many PDUs are sent in total for the data transfer (that is, ignore connection set-up and tear-down)? Show your work.


    To keep things simple, assume no fast retransmit on duplicate acks. Also assume lost PDUs are evenly distributed among the sent PDUs, and not clumped together.

    Consider the sender: it sends 100 PDUs, and 10 of them get dropped before being received. The 10 dropped PDUs are retransmitted, and one of them gets dropped before being received. The doubly-dropped PDU gets retransmitted, and assume it gets received. That’s 111 PDUs from the sender.

    Now consider the receiver. It sends 90 acks for the 90 PDUs it receives, and 9 of those acks get dropped (by the assumptions above, the lost acks trigger no retransmissions; see below). The sender retransmits the 10 dropped data PDUs, one of which gets dropped; the receiver gets the other 9 and sends 9 acks, and assume one of those acks gets dropped. The second retransmission gets received, as above, and assume the re-ack makes it to the other end (with probability 0.9).

    Because lost PDUs are not clumped together, cumulative acknowledgment isn’t involved (let us assume), and each received ack applies to the most recently sent PDU. In that case, the totals are 100 + 10 + 1 = 111 data PDUs and 90 + 9 + 1 = 100 acks, or 211 PDUs sent in all.
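
    A quick Python check of the arithmetic under the same assumptions (10% of each round’s PDUs are lost, one ack per received data PDU, and lost acks cause no extra retransmissions):

        data_pdus, acks = 0, 0
        outstanding = 100                 # data PDUs still needing delivery
        while outstanding > 0:
            data_pdus += outstanding      # send this round's data PDUs
            lost = outstanding // 10      # 10% dropped: 100 -> 10 -> 1 -> 0
            acks += outstanding - lost    # one ack per data PDU received
            outstanding = lost            # only the dropped ones are resent

        print(data_pdus, acks, data_pdus + acks)   # 111 100 211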

  6. Suppose TCP is operating over a network that is asymmetric; that is, the time to transmit PDUs in one direction differs greatly from the time to transmit in the other direction (for example, the delay from host A to host B is 10 ms but the delay from B to A is 100 ms).

    How would you modify TCP to adjust for the network asymmetry? Note that a good answer to this question has to make several assumptions, which should be clearly stated in the answer.


    Answer 1: Don’t change anything. The round-trip-time calculation is tuned to measure conditions in both directions, and adjusts the transmission rate (in PDUs/sec) to overall conditions. The slower of the two network speeds sets the transmission rate, and there isn’t much that can be done to push transmission rates beyond the slowest link.
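
    For reference, a small Python sketch of the standard smoothed round-trip-time estimate Answer 1 relies on (the usual alpha = 1/8 and beta = 1/4 weights, with the retransmission timeout set to SRTT + 4 * RTTVAR); the single estimate folds both directions’ delays into one number, which is why no change is needed:

        ALPHA, BETA = 1 / 8, 1 / 4

        def update_rtt(srtt, rttvar, sample):
            """One update of the smoothed RTT, its variation, and the timeout."""
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
            srtt = (1 - ALPHA) * srtt + ALPHA * sample
            rto = srtt + 4 * rttvar
            return srtt, rttvar, rto

        # With the asymmetric delays from the question, every sample is about
        # 10 ms + 100 ms = 110 ms, and the estimate settles on that total.
        srtt, rttvar, rto = update_rtt(110.0, 0.0, 110.0)
        print(srtt, rttvar, rto)   # 110.0 0.0 110.0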

    Answer 2: The end-point sending into the slower direction (TCP is full-duplex, so one end-point must be in that condition) can observe that the other end-point is only using a small fraction of its receive buffer. It can make this observation either from the receive-window sizes in the other end-point’s acks, or by observing that the congestion window has grown as large as the receive-window size. In the second case, the slow end-point could increase the receive-window size along with the congestion window. This lets the slow end-point put more PDUs on the network than would otherwise be possible. The adjustment only works if the slow direction is not slow because of congestion; otherwise the congestion window will be the limiting factor at the slow end-point.
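
    A rough sketch of the heuristic in Answer 2 (the function name and the buffer_limit cap are invented; this is one possible policy, not TCP as specified): when the congestion window is pinned against the advertised receive window and no recent losses have been seen, advertise a larger receive window.

        def maybe_grow_rwnd(rwnd, cwnd, recent_loss, buffer_limit, step=1):
            """Grow the advertised receive window when cwnd is pressed against it."""
            if not recent_loss and cwnd >= rwnd and rwnd + step <= buffer_limit:
                return rwnd + step        # advertise a bigger window next ack
            return rwnd                   # otherwise leave it alone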

