Computer Networking

Homework 5, 4 April 2013


This homework assignment has five questions; answer all of them. This assignment is due no later than 1:00 p.m. on Monday, 15 April.

If you mail in your assignment, please submit a printable document — a PostScript .ps or PDF .pdf document, for example — and not a source document — a Word .docx or LaTeX .tex document, for example.

  1. A mobile TCP receiver is receiving data from its TCP sender. How will the round-trip time and the round-trip time-out evolve as the receiver moves farther away from the sender and then nearer? Assume the receiver moves fast enough that the propagation delay varies between 100 and 300 msec within 1 sec.


    As the propagation delay changes, both the round-trip time (RTT) and the round-trip time-out (RTO) change in proportion. An increasing propagation delay causes packets to spend more time in transit end-to-end, increasing the RTT. The RTO is a combination of the smoothed RTT and the RTT variation, although, because the RTT changes are fast, the contribution from the RTT-variation term will be small. Similar arguments hold when the RTT decreases.
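
    To make the RTO computation concrete, here is a minimal sketch of the standard smoothed-RTT and RTT-variance update rules (the Jacobson/Karels estimator standardized in RFC 6298). The gains and the sampled RTTs below are illustrative values, not measurements from the assignment.

        ALPHA, BETA, K = 1/8, 1/4, 4                  # standard estimator gains

        def update(srtt, rttvar, sample):
            """Fold one RTT measurement into the smoothed RTT and variance."""
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
            srtt = (1 - ALPHA) * srtt + ALPHA * sample
            return srtt, rttvar

        srtt, rttvar = 0.200, 0.100                   # initial estimates, seconds
        for rtt in [0.20, 0.35, 0.55, 0.60, 0.45, 0.25, 0.20]:   # roughly 2 * propagation delay
            srtt, rttvar = update(srtt, rttvar, rtt)
            rto = srtt + K * rttvar                   # the retransmission time-out
            print(f"sample={rtt:.2f}s  srtt={srtt:.3f}s  rttvar={rttvar:.3f}s  rto={rto:.3f}s")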

  2. Given that the throughput of a TCP connection is inversely proportional to its RTT, connections with heterogeneous RTTs sharing the same queue will get different bandwidth shares. What will be the eventual proportion of the bandwidth sharing among three connections if their propagation delays are 10 ms, 100 ms, and 150 ms, and the service rate of the shared queue is 200 kb/s? Assume that the queue size is infinite without buffer overflow (no packet loss), and the maximum window of the TCP sender is 20 packets, with each packet carrying 1500 bytes.


    Assuming steady-state data transfer (that is, no start-up or shut-down transients, no interruptions due to time-outs and retransmissions, and rate-matched end-points), sliding-window flow control essentially turns into rate-based flow control, with sender i dropping a packet into the network every 2ρ_i msec, where ρ_i is the propagation delay for sender i.

    With ρ = 150 msec, a sender transmits a packet every 300 msec, or two packets every 600 msec. A sender with ρ = 100 msec transmits three packets every 600 msec, and a sender with ρ = 10 msec transmits 30 packets every 600 msec.

    Of the 35 packets entering the router queue every 600 msec, roughly 86% (30 of 35) belong to the fast sender (ρ = 10 msec), 6% (2 of 35) to the slow sender (ρ = 150 msec), and 9% (3 of 35) to the middle sender (ρ = 100 msec).
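
    The counts above can be checked with a few lines of arithmetic; the sketch below simply tallies, over a 600 msec interval, one packet every 2ρ_i msec from each sender. The sender labels are only for readability.

        delays = {"fast": 10, "middle": 100, "slow": 150}    # propagation delays rho_i, msec

        packets = {name: 600 / (2 * rho) for name, rho in delays.items()}
        total = sum(packets.values())                        # 35 packets per 600 msec

        for name, count in packets.items():
            print(f"{name:>6}: {count:4.0f} of {total:.0f} packets ({100 * count / total:4.1f}%)")
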
  3. RTP incorporates a sequence-number field in addition to the time-stamp field. Could RTP be designed without the sequence-number field, using the time-stamp field alone to resequence packets received out of order? Justify your answer.


    Probably not. Replacing sequence numbers with time-stamps requires the fairly strong assumption that each RTP packet represents the same-sized temporal slice of the media stream (125 msec of speech, for example). This assumption is expensive: the media protocol wastes bandwidth encoding less useful information (125 msec of silence, for example), and it fixes the packet's temporal size for the life of the stream (a problem for multicast streams).

    Variable bit-rate encodings are more economical, but make it difficult or impossible to interpret time-stamp gaps.

    The problem also exists in the opposite direction: if a data unit is too large (an uncompressed video frame, for example), RTP fragments it into several packets, all having the same time-stamp associated with the data unit. This makes it difficult or perhaps impossible to order the fragments without sequence numbers.
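
    To illustrate the fragmentation case, the sketch below gives three fragments of one frame the same time-stamp: sorting by time-stamp alone cannot recover their order, while the sequence number can. The field names and values are illustrative, not taken from the RTP header layout.

        from collections import namedtuple

        Packet = namedtuple("Packet", "seq timestamp payload")

        received = [                                         # arrival order, out of order
            Packet(seq=1002, timestamp=90000, payload="frame-1/fragment-3"),
            Packet(seq=1000, timestamp=90000, payload="frame-1/fragment-1"),
            Packet(seq=1001, timestamp=90000, payload="frame-1/fragment-2"),
        ]

        by_timestamp = sorted(received, key=lambda p: p.timestamp)   # ambiguous: all keys are equal
        by_sequence = sorted(received, key=lambda p: p.seq)          # unambiguous
        print([p.payload for p in by_sequence])
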
  4. When computing the per-segment checksum, TCP and UDP include some fields from the IP header before the segment has been passed down to the underlying IP layer. How can TCP and UDP know the values in the IP header?


    TCP and UDP (and other protocols) violate layering by reaching down into the network (or IP) layer for a pseudo-header, which contains the IP-level information needed to compute the checksum: the source and destination addresses, the protocol number, and the segment length.
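
    As a concrete sketch, assuming IPv4 and UDP, the pseudo-header prepended for the checksum computation carries the source and destination addresses, a zero byte, the protocol number, and the UDP length. The addresses, ports, and payload below are placeholders.

        import socket
        import struct

        def internet_checksum(data: bytes) -> int:
            """16-bit one's-complement sum of 16-bit words (RFC 1071)."""
            if len(data) % 2:
                data += b"\x00"                              # pad to an even length
            total = sum(struct.unpack(f"!{len(data) // 2}H", data))
            while total >> 16:
                total = (total & 0xFFFF) + (total >> 16)
            return ~total & 0xFFFF

        src, dst = socket.inet_aton("10.0.0.1"), socket.inet_aton("10.0.0.2")
        payload = b"hello"
        udp_length = 8 + len(payload)                        # the UDP header is 8 bytes

        # Pseudo-header: source address, destination address, zero, protocol (17 = UDP), UDP length.
        pseudo_header = src + dst + struct.pack("!BBH", 0, 17, udp_length)

        # UDP header (source port, destination port, length) with the checksum field zeroed.
        udp_header = struct.pack("!HHHH", 5000, 6000, udp_length, 0)

        print(hex(internet_checksum(pseudo_header + udp_header + payload)))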

  5. Host A wants to simultaneously send messages to hosts B and C. A, B and C are connected by a broadcast channel—a packet sent by A is carried by the channel to both B and C. The broadcast channel connecting A, B, and C can independently lose or corrupt messages to each destination. For example, a message sent from A might be correctly received by B, but not by C. Design a stop-and-wait-like error control protocol for reliably transferring a packet from A to B and C such that A will not accept the next payload from the upper layer until both B and C have correctly received the current payload. Describe the packet formats used.


    One approach is to modify stop-and-wait slightly so it sends to a list of receivers rather than one receiver. The sender works through the list, running the usual stop-and-wait protocol for each receiver, moving to the next receiver after successfully sending to the previous receiver. This change requires modifications only at the sender.

    A second, more nearly simultaneous approach collapses the n rounds of the first approach into a single round: send the message to all receivers on the list at once, and then collect the expected acks (a sketch of this sender appears after the next paragraph). This approach also requires changes only at the sender.

    The second approach still sends n messages when there are n receivers. A third, multicast, approach sends one message for all receivers (assuming no errors). This approach requires significant changes at the sender and receiver, mainly to support multicast addressing. The packet format would also have to change to include a sequence number so successful receivers could distinguish retransmissions of old packets from new packets.
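
    A sketch of the second approach's sender follows, assuming hypothetical unreliable_send() and collect_acks() primitives standing in for the broadcast channel: the sender transmits the current packet to every receiver at once, retransmits to any receiver whose acknowledgement is missing, and only then accepts the next payload from the upper layer.

        def send_reliably(payload, seq, receivers, unreliable_send, collect_acks):
            """Block until every receiver has acknowledged packet (seq, payload)."""
            pending = set(receivers)
            while pending:
                for receiver in pending:
                    unreliable_send(receiver, {"seq": seq, "data": payload})   # data-packet format
                acks = collect_acks()                        # set of (receiver, seq) acks seen so far
                pending -= {r for r, s in acks if s == seq}
            # Only now may the sender accept the next payload from the upper layer.

    In this sketch the data packets carry a sequence number and the payload, and the acknowledgements carry the receiver's identity and the sequence number being acknowledged.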

This page last modified on 2013 April 21.