Computer Networking

Test 1, 2 October 2012


This test has six questions; answer at least five of them.

Up to the first 75 words appearing after a question are taken to be the answer to the question. Words following the 75th word are not part of the answer and will be ignored.

  1. Either public-key cryptography or symmetric cryptography can be used for authentication. Considering only logistics or mechanics, and not worrying about how they can be attacked, which of the two cyphers is the more practical to use for authentication? Justify your answer.


    Authenticating via public-key cryptography is more practical than is authenticating via symmetric-key cryptography. The secret used to identify an individual in public-key cryptography remains secret during authentication. To authenticate using symmetric-key cryptography requires distributing the secret to all those requesting authentication.
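    The logistics difference can be sketched in code. Below is a toy symmetric-key challenge-response in Python (the function and variable names are illustrative assumptions, not a real protocol): note that `verify` needs the very same secret the prover holds, so the secret must be handed to every verifier. With public-key signatures the verifier would need only the public key, and the private key would never leave the prover.

```python
import hashlib
import hmac
import os

# Toy symmetric challenge-response sketch (all names are assumptions).
# The logistics problem: verify() recomputes the MAC, so every verifier
# must hold a copy of shared_key.  A public-key signature scheme would
# let the verifier check the response with only the public key.

shared_key = os.urandom(16)          # must be distributed to every verifier

def prove(key, challenge):
    """Prover's response: a MAC over the verifier's random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """Verifier recomputes the MAC -- possible only if it has the secret."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
response = prove(shared_key, challenge)
print(verify(shared_key, challenge, response))   # True
```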

    The answers, approximately verbatim, and in no particular order:

    1. Public-key cryptography is more practical to use for quthentication. It is more practical because the public key uses a different key for encryption and dycryption1 whereas symmetric key uses the same key for encryption and decryption, making it easier to crack.

      1: True, but what does that have to do with authentication?

    2. [ not answered ]
    3. Between both public key cryptography and symmertic cryptography, symmetric cryptography is the more practical choice for authentication. The reasons for this are that public-key cryptography allows for anyone to find their key to the other person, so there is no way for good authentication while symmetric keys only allow the participant of the data exchange to authenticate.2

      2: But couldn't public-key cryptography use whatever mechanism you find practical for symmetric-key cryptography?

    4. Public key cryptography is practical and can be used for digital signatures. If a message is encrypted with a private key and can be decrypted when its paired public key, then only the owner of the private key could have done it.3 This technique is not generally used in bulk data streams, but for authentication techniques this is widely used.

      3: How does this make it more practical than private-key cryptography?

    5. Symmetric cryptography is more practical to use for authentication since key used for encryption & decryption is known to only who does them (assumption).4 A bit more secure than public-key cryptography. Easy to implement & efficient.

      4: Ok, but is it practical to spray around secret keys to everybody who wants authentication?

    6. Public key cryptography ("crypto") is less private than symmetric key crypto. The keys are available to the world.5 Symmetric key crypto (e.g., Caesar cyphers) use simple computations and substitutions (the mechanics) to encrypt the data. Since authentication requires heavy security (typically), it should be as private as possible.6

      5: And does that make authentication harder or easier?

      6: What's the answer?

    7. First symmetric cryptography based on using one key to encrypt and decrypt data while public-key based on asking for senders or receiver's public key. While using public-key cryptography, the key can be used by unknown and the receiver can't be sure if the message is coming from the real sender or from an attacker.7 Data encrpyted with symmetric encryption method can be broken if the key that used in encryption was attacked or broken.8

      7: But wouldn't public-key cryptography solve that problem too?

      8: Is there an answer here?

  2. An application protocol is self-describing if it includes information about the data being sent along with the data being sent so that the receiving side can understand the data received. Describe the information a self-describing protocol can include in the data stream assuming the data is a sequence of integers of the same kind. Clearly state your assumptions.


    Assume architectural variation at the end-points, integer lists of arbitrary length, and reliable byte-stream transmission. The self-describing protocol establishes 1) the integer size, 2) the integer endian-ness, and 3) the length of the list using the following structure: 1) a byte giving the number of bytes per integer, 2) an integer of the form 0x00010203, from which the receiver deduces the sender's byte order, 3) an integer giving the number of integers in the list, and 4) the integers themselves.
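    The framing described above can be sketched with Python's struct module (the field layout follows the model answer; the function names are assumptions). The receiver unpacks the probe integer both ways and keeps whichever byte order reproduces 0x00010203.

```python
import struct

# Sketch of the self-describing framing: one byte for integer size,
# one probe integer whose value is 0x00010203 (reveals byte order),
# one integer holding the list length, then the integers themselves.
# 'i' is a 4-byte integer, matching the size byte written below.

PROBE = 0x00010203

def encode(ints, big_endian=True):
    fmt = ('>' if big_endian else '<') + 'i'
    head = struct.pack('B', 4)                      # 4 bytes per integer
    head += struct.pack(fmt, PROBE)                 # endian-ness probe
    head += struct.pack(fmt, len(ints))             # number of integers
    return head + b''.join(struct.pack(fmt, i) for i in ints)

def decode(data):
    size = data[0]
    # Try big-endian first; if the probe doesn't match, the
    # sender must have been little-endian.
    for order in ('>', '<'):
        if struct.unpack(order + 'i', data[1:1 + size])[0] == PROBE:
            fmt = order + 'i'
            break
    count = struct.unpack(fmt, data[1 + size:1 + 2 * size])[0]
    body = data[1 + 2 * size:]
    return [struct.unpack(fmt, body[i * size:(i + 1) * size])[0]
            for i in range(count)]

print(decode(encode([1, -2, 300], big_endian=False)))   # [1, -2, 300]
```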

    The answers, approximately verbatim, and in no particular order:

    1. [ not answered ]
    2. [ not answered ]
    3. A self-describing protocol can include references to the types of messages sent (e.g., request and response messages). It can include the syntax of the message types (fields that are present and how they're delineated). It can include the semantics of the information in the fields. It may also contain rules for operation such as criteria for when/how a process sends + responds to messages.1

      1: Yes, it can do all those things, but what should it do for this particular problem about integer lists?

    4. Information that can be included in a sequence of intergers contain encryption or caching type if it is used2 and the sender and receiver's information and protocol type that used.

      2: What is the caching type? And why is it important to know what caching type is being used?

    5. For an application protocol to be self-describing, some extra the data it can include in the pakcets of data so that the receiving side can understand the data are the packet number, so that the receiver knows what packet holds which information, and how many packets there are.3

      3: How can we be sure that's even possible? Does such information exist?

    6. [ not answered ]
    7. In this case, protocol can include data type (like integer), description of the data being sent (meaning & representation) of the sequence of integers so that receiver can understand the data.4

      4: But how does the protocol do this? And what in particular does it do?

  3. When computing the retransmission time-out (RTO), what should TCP do about packets that time out? Justify your answer.


    The RTO is calculated from measured round-trip times. Packets that time out don't have a measured round-trip time because there is no ACK, and should be ignored.
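    One common way to realize this is Jacobson's smoothed estimator combined with Karn's rule of skipping retransmitted segments, using the constants from RFC 6298 (the class and variable names below are assumptions):

```python
# Minimal RTO estimator sketch.  The point from the answer above: a
# segment that timed out and was retransmitted yields no unambiguous
# round-trip sample, so it is simply ignored (Karn's algorithm).
# Constants and update rules follow RFC 6298.

class RtoEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4

    def __init__(self):
        self.srtt = None      # smoothed round-trip time, seconds
        self.rttvar = None    # round-trip time variation
        self.rto = 1.0        # initial retransmission time-out

    def sample(self, rtt, retransmitted=False):
        if retransmitted:     # no measured RTT: leave the RTO alone
            return self.rto
        if self.srtt is None:                 # first measurement
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = ((1 - self.BETA) * self.rttvar
                           + self.BETA * abs(self.srtt - rtt))
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        self.rto = self.srtt + 4 * self.rttvar
        return self.rto

est = RtoEstimator()
est.sample(0.100)                        # a normal measured RTT
before = est.rto
est.sample(5.0, retransmitted=True)      # timed-out packet: no effect
print(est.rto == before)                 # True
```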

    The answers, approximately verbatim, and in no particular order:

    1. Packet might be timed out for several reasons. When a packet timed out that means the packet never reach to the destination. That packet should be resent again.1

      1: What about the RTO?

    2. With TCP and RTO, when the packets time out, TCP should resend the packet again.2

      2: And what about the RTO?

    3. When a segment enters into a TCP connection, then the timer starts. If the timer expires before the host receives an acknowegment for the data in the segment, then the host retransmits the segment. Timeout should be larger than the round-trip time, but not much larger. TCP retransmits the data if the packets timeout.3

      3: And what does it do for the RTO?

    4. TCP retransmits the packets that time out in order to provide reliable data transfer since it's a service of TCP. If TCP does not retransmit time-out packets that leads to loss of those packets & disorder of packet.4

      4: What about the RTO? What's the answer to the question?

    5. TCP should retransmit packets that time-out.5 If a packet times out before an acknowledgment is shown the sender should resend [?] although it is quite possible the receiver received it and just has not acknowledged it. It is equally possible that the packet exploded in the network or got lost. To ensure the receiver received it the timed-out packet should be retransmitted. This system works on a timer.

      5: Is this what the question asked about?

    6. During transmission process the data segments and the acknowledgments can get lost. The TCP handles this by setting a timeout when it sends the data and if the data is not acknowledged and when the timeout expires, it retransmits the data. A retransmission timeout ??? when the sender packets is missing too many acknowledgments and decides to take a time-out and stops sending.6 After some time the sender sends a packet and again another packet, i.e. [ picture ].

      6: What? How are lost packet retransmissions handled?

    7. If a packet times out, TCP should retransmit the packet (the "assumption" being that it was lost).7 It should also reevaluate the RTO and adjust it acordingly (perhaps it needs to wait longer before it retransmits).8 Naturally, the timer itself should be reset as well.

      7: Is this what the question asked about?

      8: How? Are such adjustments possible?

  4. Describe how TCP connection set-up could be optimized if there was only a one-way data flow from the initiator to the receiver, as might be the case if the receiver is a logging service.


    A receiver does not need to synchronize with the sender on one-way data flow, which reduces the three-way connection establishment handshake to a two-way handshake. Because the receiver doesn't establish a connection with the sender, it also doesn't have to close its half of the connection, reducing the four-way connection shutdown handshake to a two-way handshake.
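    A back-of-the-envelope count of the control messages this saves, under the reductions described above (the labels are shorthand, not exact TCP segment names):

```python
# Message counts per connection: normal full-duplex TCP versus the
# one-way optimization sketched above (labels are assumptions).

full_duplex = {
    "setup":    ["SYN", "SYN+ACK", "ACK"],       # three-way handshake
    "teardown": ["FIN", "ACK", "FIN", "ACK"],    # both halves close
}
one_way = {
    "setup":    ["SYN", "SYN+ACK"],              # receiver never syncs back
    "teardown": ["FIN", "ACK"],                  # only the sender's half closes
}

saved = (sum(map(len, full_duplex.values()))
         - sum(map(len, one_way.values())))
print(saved)   # 3 fewer control messages per connection
```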

    The answers, approximately verbatim, and in no particular order:

    1. The TCP connection set-up like this i.e., if there was only one way data flow then this communication can result in [??] [??] traffic and can [??] [??] when there is insufficient bandwidth.
    2. The TCP connection set up could be optimized by the one-way data flow which would allow for packets to be sent faster, since the one-way flow wouldn't need acknowledgments from the receiver.1

      1: Why not? Does the transport become suddenly reliable if data flow is only one-way?

    3. In this case, TCP connection set-up could be optimized because 3-way handshake is reduced (eliminiated) to [?]-way handshake. Since there is only one-way data flow from initiator to the receiver, so initiator won't send a request for connection because it knows that it won't get a response. So there will be no overhead for connection before sending & receiving actual data packets.
    4. The TCP client side sends a TCP segment to the server side.
    5. TCP connection set-up can be optimized using the three-way handshake. Assuming there is a one-way data flow, the initiator could send a connection request to the receiver. Upon receipt of this message, the receiver could acknowledge the request by sending a response message. When the initial initiator receives the acknowledgment, it can then send a message to the receiver confirming the acknowledgment (this message may also carry data, but it does not have to). The connection is now established.2

      2: How is that an optimization over what TCP usually does?

    6. By employing TCP data transport as TCP packet flows can gain the security value of unidirectional transfer.3 The 3-way handshake can be reduced to 2-way handshake which increases the speed.4

      3: What is “security value”?

      4: Is it possible to do that? If so, how would that work?

    7. A TCP connection set-up could be optimized if there was only a one-way data flow from the initiator to the receiver by changing to simplex.5 TCP is a full-duplex two way (both at once) which might slow down the connection. A simplex is just one-way which might help connection be faster.

      5: Is that possible? How?

  5. In rate-based flow control the two end-points negotiate a rate - in bytes/sec, for example - at which data is sent over the connection in either direction. Rate-based flow control has the advantage of minimizing acknowledgment overhead. Give an example of a service which can use rate-based flow control without acknowledgments. Give an example of a service that requires acknowledgments with rate-based flow control, and describe the kind of acknowledgments that would produce the least overhead. Justify your answers.


    A streaming multimedia service can exploit rate-based flow control without acknowledgments. The data stream is sufficiently rich that (small) missing sections can be ignored or compensated for at the receiver. FTP file transfer is an example service that requires acknowledgments with rate-based flow control. Missing portions of the file stream can, in general, be recovered only by retransmission. Selective acknowledgments would provide the least overhead at the sender, while negative acknowledgments would provide the least overhead in the network.
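    The negative-acknowledgment idea can be sketched briefly: the receiver stays silent while the stream arrives intact and spends traffic only on the sequence numbers it finds missing (the function name below is an assumption):

```python
# Sketch of NAK generation: given the sequence numbers actually
# received, report only the gaps.  An intact stream costs no
# acknowledgment traffic at all.

def naks_for(received_seqs):
    """Return the missing sequence numbers a receiver would NAK."""
    expected = range(min(received_seqs), max(received_seqs) + 1)
    return sorted(set(expected) - set(received_seqs))

print(naks_for([0, 1, 2, 4, 5, 7]))   # [3, 6] -- only the gaps cost traffic
```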

    The answers, approximately verbatim, and in no particular order:

    1. [ not answered ]
    2. Reliable data transfer (with rate-based flow control) over a perfectly reliable channel is an example of a service which can use rate-based flow control without acknowledgments.1 Reliable data transfer (with rate-based flow control) over a channel with bit-errors or over a lossy channel with bit errors) is an example of a service that requires acknowledgments. Negative acknowledgements would produce the least overhead in case 2) since channel has just less chance of corrupting the packets so in this case acknowledgmets (NACKs) for lost/corrupted packets will be less than acknowledgments for received packets (ACKs).

      1: True, but it's true for any flow-control mechanism.

    3. A service that can use rate-based flow control without acknowledgement is when a low-end machine requests information (such as website) from a higher computer. The low-end machine can just load slowly, and the rate-based control will limit the rate which sender transmits data.2 A service that requires acknowlegments with rate-based flow control is sending messages. A rate based flow control will limit the rate but it needs acknowledgment letting it know when more information can be sent.3 Negative acknowledgments have best performance because they only send acks when needed.

      2: But what about the missing acknowledgments? They won't make a difference?

      3: Is that how acknowledgments are used?

    4. An example of a service that uses rate-based flow control with acknowledgments would be TCP.4 The acknowledgments that would produce the least overhead would be how the packets hold the info of the rate of the packets.5

      4: TCP uses rate-based flow control?

      5: What does this mean? And which acknowledgment scheme does whatever it is you're suggesting?

    5. [ not answered ]
    6. A service that can use rate-based flow control without acks like services use streaming.
    7. TCP can use a receive window to employ rate-based flow control without using ACKS.6 The receive window represents available space in the receiver's buffer, and it is dynamic. The sender cannot overload the receive window.7 Systems using acks vary in terms of optimal choices. - Systems using satellites might employ cummulative acks because they handle high delay the best.8 - Systems using wifi might employ selective acks due to high chance of errors. - Systems using wired connections might employ negative acks due to low chance of errors.9

      6: If there aren't any ACKs, why have a receive window at all?

      7: Is there a service described here?

      8: But what about wasteful retransmissions?

      9: Do such systems exist?

  6. Two processes are connected by a 1,000 mile link and are using sliding-window flow control. The processes use a window size of 1,000 and 1,250 byte packets, and link utilization is 100%. How fast is the link in bits/sec? Justify your answer.


    Assume that ACKs are much smaller than data packets (4 vs 1,250 bytes, say), and can be ignored (or, if you prefer, assume packets are 1,250 + 4 bytes long). With 100% utilization and window size 1,000, there are 1,250,000 bytes in transit at any time. A bit takes (1,000 miles)/(186,000 miles/sec) ≈ 0.005 sec to traverse the link. The link's capacity is therefore (1,250,000 bytes)(8 bits/byte)/(0.005 sec) = 2 Gbits/sec.
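    The same arithmetic, spelled out in Python (using the rounded transit time, as above):

```python
# Worked arithmetic for the sliding-window question; ACK sizes are
# ignored, as assumed in the answer above.

window_pkts   = 1_000
bytes_per_pkt = 1_250
link_miles    = 1_000
c_miles_per_s = 186_000                  # speed of light in miles/sec

in_flight_bits = window_pkts * bytes_per_pkt * 8       # 10,000,000 bits
transit_time   = round(link_miles / c_miles_per_s, 3)  # 0.005 sec

rate = in_flight_bits / transit_time     # ~2.0e9 bits/sec, i.e. 2 Gbits/sec
print(rate)
```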

    The answers, approximately verbatim, and in no particular order:

    1. Given window size N = 1,000, \ packet size = 1,250 bytes \ link length = 1,000 mile \ link utilization = 100% It means whole link length is used; means there will be 1000 packets in link at one time. Size of 100 packets = 1000 x 1250 bytes = 1000 x 1250 x 8 bits = 10,000,000 bits Therefore link speed = 10,000,000 bits/sec1

      1: How did you go from bits to bits/sec?

    2. [ not answered ]
    3. The link speed is determined by the size of the window at the receiver and the sender. The window size at the receiver side differs from the one at the sender.2 If the link utilizaion is 100%, then the window size is being filled immediately.3

      2: Why is the receiver important?

      3: And so what's the link speed?

    4. [ not answered ]
    5. [ not answered ]
    6. 1,000 window size/1,250 byte packets 0.1250 bits/second4

      4: How did you go from packets/byte in the previous value to bits/sec in this one?

    7. Not enough information. Need to know how fast a packet can cover 1000 miles with no congestion5 and how many packets are in the window.6

      5: You should know this, but you can make something up if you don't.

      6: The problem told you that, along with 100% utilization.


This page last modified on 2012 October 15.