Computer Networking

Test 2, 20 November 2012


There are six questions; answer at least five of them. You have one hour, from 6 to 7.

Up to the first 75 words appearing after a question are taken to be the answer to the question. Words following the 75th word are not part of the answer and will be ignored.

  1. Using decimal numbers and not binary polynomials, explain how cyclic redundancy checks (CRCs) work.


    To send decimal numbers with error detection (correction not needed), pick a positive integer divisor G producing k-digit remainders, k = ⌈log₁₀ G⌉. Take the message N and multiply it by 10ᵏ to get N′. Divide N′ by G to get remainder r, and subtract r from N′ to get N′′, a number evenly divisible by G. Send N′′. The receiver divides the received number m by G; a non-zero remainder means the message was corrupted during transmission.
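    The scheme above can be sketched in Python; the generator 271 and message 12345 are arbitrary illustrative values, not part of any standard.

```python
import math

def crc_encode(n, g):
    """Append a k-digit check field to message n using generator g."""
    k = math.ceil(math.log10(g))   # k-digit remainders
    shifted = n * 10**k            # N' = N * 10^k
    return shifted - shifted % g   # N'' is evenly divisible by g

def crc_check(m, g):
    """A non-zero remainder means the message was corrupted."""
    return m % g == 0

g = 271                            # arbitrary generator, 3-digit remainders
sent = crc_encode(12345, g)        # 12344863
print(crc_check(sent, g))          # True: clean transmission
print(crc_check(sent + 1, g))      # False: single-digit corruption detected
```

    The real CRC does the same thing with binary polynomial arithmetic instead of integer division, but the divide/subtract/check structure is identical.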

    The answers, approximately verbatim, and in no particular order:

    1. CRC is an error detection technique is little complex and it's based on computer hardware not software as checksumming. It works by doing polynomial arithmetic operation on bit string.1

      1: Ok, but what do the operations look like? What purpose do they achieve?

    2. With decimal numbers and cyclic redundancy checks, the CRC sends a line of code, starting and ending with 0s.2 If the code is 011111, it must have a 0 after the 1,3 if it has a 1, the next number tells there was an error, if it is a 1 it is an error, but if it's a 0 it [?] just [?] and of frame.

      2: How many 0s? How is the number determined? What purpose do the 0s serve?

      3: Which 1? There's a lot of them.

    3. The sender & receiver agree on a bit pattern known as a generator. The sender will choose additional bit and add them to the data which should be exactly divisible by the generator.4

      4: How does the sender do this? And after the sender does this, what happens then?

    4. CRCs are error detectors that check the code for problems. Typically, they are done in binary by adding a set number of bits to the code such that it is divisible by the pattern + 1 bits. The check B done using mod-2 arithmetic. Using decimal numbers provides the same result when done in mod-10.5 If there is a remainder, an error has occurred 10 divided by 3 returns a remainder of 1. There has been an error.6

      5: Mod-10 arithmetic? How does that work?

      6: Why? What has gone wrong to produce this error?

    5. By selecting C(x)7:

      1 If all single bit errors when the xᵏ8 and x⁰ terms have coefficient 1.9

      2 All double bit errors when C(x) contains a factor.

      3 Any odd number of errors when C(x) contains the factor x + 1.

      4 Any error burst of length less than k bits.

      7: What is C(x)?

      8: What is k?

      9: What does this mean? How does it work?

    6. CRCs are polynomial based error detection, but when the decimal numbers are used they make almost the same way but they return a 32-bit unsigned[?] value. It works by replacing all the [?] beyond the two decimal places with nines which will help the cyclic redundancy checks to work without the help of binary polynomials.10

      10: How do the nines help? And how does the cyclic redundancy check work?

    7. Cyclic redundancy check using decimal numbers works by converting decimal numbers into binary values.11

      11: And then what happens?

    8. In decimal numbers, the receiving and[?] it divides by G.

      Then we get a remainder, R. By subtract R from received frame[?] we get T. If G divides T, then there is no error. If G doesn't divide T, then there is error.12

      Suppose received number is 210,278 and G is 10,941. By dividing 10,941 210,278, R → 2399. 210,278 - 2399 = 207,379 is divisible by 10,941, then no error.

      12: Is the subtraction done at the sending end or the receiving end?

  2. Can the exposed host problem occur on a wired bus LAN? Justify your answer.


    No. Concurrent senders on a bus collide immediately, and there's no appreciable attenuation with distance. Once a packet collides it's gone, and receivers, no matter where they're located, will fail to receive it.

    The answers, approximately verbatim, and in no particular order:

    1. The exposed host problem cannot occur on a wired bus LAN provided the following assumptions are true:

      - No host within the LAN has access to any system or network outside the LAN.1

      - All hosts within the network are trusted.2

      - No system outside the LAN has access to any host on the LAN.3

      These conditions create a closed circuit such that only the intended hosts have access to the data being transferred.4

      1: Why would such access have an effect on anything?

      2: Trusted to do what?

      3: Isn't that a little severe?

      4: Maybe, but how does that prevent the exposed host problem?

    2. No, it's not gonna expose because wired bus LAN is point to point connection,5 also, there is no congestion on the wired buss LAN because data will flow easily.6

      5: What is it about being point-to-point that prevents the exposed host problem?

      6: This is tautological. If there is no congestion then data flows easily, and vice versa. But why should there be no congestion?

    3. [ not answered ]
    4. The exposed host problems cannot occur in the wired bus LAN because when the host sends some packet through a wired bus LAN the packets are not expected[?] from the host and are delivered in the LAN on lines[?] with the help of bus LAN which is wired.7

      7: What about all of this avoids the exposed host problem?

    5. The exposed host problem cannot occur on a wired bus LAN. This is because that LAN's bridge fails,8 the host is unable to forward information to the receiver,9 so the host cannot expose itself to others.10

      8: Where did a bridge come from?

      9: Not even if the host and receiver are on the same side of the bridge?

      10: What about all other hosts on the same segment?

    6. Exposed problem occurs when there is an interference area between two senders due to each sender's range. (B and C) and B wants to send to A and C wants to send to D. [ diagram ]

      On a wired bus LAN, that would not be a problem since the link will manage transmissions between senders11 and there is no outgoing signals.12

      11: How does the link manage these transmissions?

      12: Outgoing signals from where? To where?

    7. [ not answered ]
    8. No, the exposed host problem cannot occur on a wired bus LAN because there won't be any chance of overlap between two wired bus LANs.13

      13: But what about two hosts on the same LAN?

  3. Link-state routing algorithms react more quickly to bad news than do distance-vector routing algorithms, but in some sense it could be argued that distance-vector algorithms react more quickly to good news than do link-state algorithms. Explain the argument, and evaluate the two advantages (that is, link-state reacting more quickly to bad news and distance-vector reacting more quickly to good news) to determine which, if either, is more important.


    Distance-vector sends and receives optimal routes. A node receiving optimal routes immediately incorporates them (or not) into its routing table. Link-state sends and receives network topologies. A node receiving a network topology incorporates it into its graph and runs a shortest-path algorithm. These steps take longer than distance-vector integration, and so link-state reacts more slowly to good news (improved routes) than does distance-vector. Link-state's quicker reaction to bad news is more valuable than distance-vector's quicker reaction to good news: the distance-vector advantage for good news is small, while the link-state advantage for bad news is large.
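    The cheapness of distance-vector integration can be seen in a minimal sketch of the merge step; the table layout and names here are illustrative, not taken from any particular protocol.

```python
def dv_update(table, neighbor, link_cost, advertised):
    """Merge a neighbor's advertised routes into our table: one
    comparison per destination, no graph recomputation needed."""
    changed = False
    for dest, cost in advertised.items():
        total = link_cost + cost
        if dest not in table or total < table[dest][0]:
            table[dest] = (total, neighbor)   # good news adopted immediately
            changed = True
    return changed

table = {}                                    # dest -> (cost, next hop)
dv_update(table, "B", 1, {"C": 2, "D": 5})
print(table)   # {'C': (3, 'B'), 'D': (6, 'B')}
```

    A link-state node, by contrast, folds the same news into its topology graph and reruns a shortest-path computation before its routing table changes.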

    The answers, approximately verbatim, and in no particular order:

    1. With link-state algorithms, the argument for it reacting to good news more quickly is based on that each node talks to every other node,1 but the problem is that it can only tell them what it knows for sure.

      With the distance vector, each node talks to only its direct [?] neighbors, but it tells them everything it knows.

      Based on this, link-state algorithms are the better of the two because it can get more information from different nodes, which can give it a more rounded information.

      1: Is that true? Or does link-state have adjacent neighbors communicate?

    2. In link state algorithm oscillations occur because all the states end up to infinity2 eventually and the network keeps on oscillating.3

      Distance-vector algorithm converges slowly and it also suffers from count to infinity problem.

      Dx(y, z) = C(x, z) + min_w { Dz(y, w) }.4 Distance table is maintained at each node, it5 is iterative, asynchronous and distributed.

      The bad news travels slowly in distance vector algorithm because it updates tables one by one and takes a while.

      [ diagram ] In fig. 1 if the cost of xy is changed from 4 to 1, then it takes only two iterations6 to update the tables. But if the cost increases it takes long time for it to recognize. So good news spreads fast but bad news spreads slow in distance-vector routine.

      2: Why would all link states go to infinity? Why not just the states associated with bad links?

      3: What would be oscillating?

      4: What does this equation mean?

      5: To what does “it” refer? The table? If so, how could the table be iterative, asynchronous and distributed?

      6: Two iterations of what, the link-state or distance-vector algorithm?

    3. Distance-vector algorithms react more quickly to good news since it works by passing routing lists to all routes so if cost or out-link changes between any routes to destination it will be passed quickly whereas link-state algorithms react more quickly to bad news since it works by exchanging piece of network topology so if any router is taken out of the network that will passed quickly. Link-state reacting more quickly to bad news is more important because bad news should be passed quickly if not it leads to count to infinity or any other problems.
    4. Link state reacting more quickly to bad news is advantageous because they compute shortest path algorithms based on graph. Therefore when it receives the news that there are multiple paths to get somewhere it will find the shortest path turning the bad news that there are many paths into an advantage for itself because it can calculate the shortest path.

      Distance vector reacting more quickly to good news is advantageous because when router receives updated lists it will see this as good because distributing it will allow other routers to be able to form paths to each other.
    5. Link state routing algorithm is based on the idea that all nodes and routes in a network are known and they all exchange information while in distance-vector each node knows its direct neighbors. Since all nodes know each other in link state, when a path down or increases in cost, bad news, all nodes will know that. So the new cost will be sent to all nodes. On the other hand when a path down in distance-vector, only direct neighbors will know that while other nodes keep sending without knowing that bad news and that's true for link costs, which causes count to infinity problem.7

      Also distance vector suffers from another bad problem which is count to infinity and that happens when a path is down and only direct neighbors know that while other nodes keep sending until it becomes like a loop.

      Good news like low cost is known quickly in distance vector since each node knows its neighbors, so when the cost is dropped that news delivers quickly between neighbors.8

      7: This is why link-state is faster than distance-vector, but the question asked about how distance-vector might be faster than link-state.

      8: But is it more quickly than is done by link-state? Why?

    6. Link-state routing algorithms react more quickly to bad news because of:

      It9 has a time to live in case of bad news happened like (corruption or crashes).10

      Distance vector routing algorithms react more quickly to good news because of:

      It makes routers act awarely by the path aware because it's learning the network graph.11

      Which important:

      I think link state routing algorithms are important than distance vector routing algorithms because it can deal with bad news or problems.12 Also, dealing with errors is important than improving performance like distance vector algorithms do.

      9: To what does “it” refer?

      10: How does a time-to-live help link-state react more quickly? And is this the reason link-state reacts more quickly to bad news?

      11: Do you mean it directly learns the network graph, or it indirectly learns optimal routes?

      12: But so does distance-vector, doesn't it?

    7. (LS) = Link-state \ (DV) = Distance-vector

      LS routing algorithms wake a long time to set up,13 but once they are initialized, they store full maps of networks for easy routing + rerouting. Problem: if the system crashes, it takes more time/effort to set-up again.14 If there is a problem, however, LS routing algorithms can reroute much faster with its omniscience.15

      DV routing algorithms are iterative.16 Nodes store information from their neighbors about path costs and update when problems occur. If there are no problems (i.e., good news), they react quicker.

      Which is better is circumstantial. In cases like WiFi (error-prone), LS is better for damage control. For more stable (e.g., wired) networks, DV is better for throughput. Neither scales with size.

      13: “Long time” means what?

      14: More time than distance-vector? Why is that?

      15: Surely an omniscient algorithm would reroute much faster, but is link-state omniscient?

      16: But isn't link-state iterative too?

    8. Link state routing algorithms react more quickly to bad news is more common than distance vector routing algorithm because they discover their neighbors [?] and cost to the neighbor and broadcast neighbor information to the router and add received neighbor information to the graph and the distance vector algorithm reacts to good news quickly because they distribute shortest path to each router and they periodically exchange their lists with their adjacent routers and each router updates its list.17

      17: And the relative value of the advantages is...?

  4. Congestion is usually considered a problem in the layers above the data-link layer, but several phenomena in the data-link layer can be looked at as a form of congestion. Give two examples, including the appropriate congestion control mechanism for each example.


    Unfair link scheduling can cause starvation for unfavored hosts, which acts as a form of congestion. Using fair link scheduling removes this congestion. A more traditional form of congestion can occur at each node when frames arrive at a greater rate than they can be sent. In this case excess frames can be discarded, or back-pressure can be sent up the protocol stack to reduce the arrival rate.
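    The second, node-level form of congestion can be sketched as a bounded outbound frame queue; the capacity and frame values are arbitrary illustrative choices.

```python
from collections import deque

class LinkQueue:
    """Outbound frame queue at a data-link node. When frames arrive
    faster than the link can send them, the excess is discarded
    (alternatively, back-pressure could be signaled up the stack)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = deque()
        self.dropped = 0

    def arrive(self, frame):
        if len(self.frames) >= self.capacity:
            self.dropped += 1      # congestion: discard the excess frame
            return False
        self.frames.append(frame)
        return True

q = LinkQueue(capacity=2)
for f in ("f1", "f2", "f3"):
    q.arrive(f)
print(q.dropped)   # 1: the third frame found the queue full
```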

    The answers, approximately verbatim, and in no particular order:

    1. For the congestion to be prevented data-link layer1 uses

      1 feedback-based flow control and

      2 Rate-based flow control2

      Feedback based flow control protocols are

      1 Stop and wait protocol: sender waits for acks.

      2 Sliding window protocol: The ack is being sent with the next frame which is known as piggybacking.

      3 Go-back-n.

      4 Selective repeat is used for congestion control.

      1: Is it possible for congestion to occur at the data-link layer? If so, what would it look like?

      2: Are flow control and congestion control the same thing?

    2. Frames could collide, meaning that a node receives multiple frames at the same time. Random access protocols (such as slotted ALOHA) can be employed to force colliding packets to each wait a random amount of time before retransmitting.3

      Multiple nodes could flood the network by sending frames at the same time.4 reservation protocols + “taking-turns protocols” can be employed to help alleviate the stress this puts on the network by forcing nodes to take turns sending frames.

      3: Isn't this more like random back-off?

      4: Isn't this the same problem as in the previous paragraph?

    3. With congestion in the data-link layer two forms of congestion would be framing error and bit error.5 To control the congestion the two appropriate congestion mechanism would be the byte stuffing method of the bit stuffing mechanisms.6

      5: How do framing and bit errors have to do with resource exhaustion?

      6: They are certainly part of a larger framing scheme, but how do stuffing mechanisms deal with framing and bit errors?

    4. Congestion: The router7 can handle M bits/sec maximum so congestion occurs when σj ij bits/sec > M or when σj ij, 4 bits/sec > M4.8

      To control congestion you could only admit traffic when the network can handle it without congestion or use multi-path routing. Or random early detection9 drop packets at random in near congestion conditions.

      7: Is there a router at the data-link layer?

      8: What does this mean for the data-link layer?

      9: Why is this an appropriate mechanism to use at the data-link layer?

    5. Congestion occurs when the traffic outstrips the supply. The congestion mechanism phenomena examples in the data-link layers are flow control10 and stop and wait11 and the appropriate congestion control mechanisms are traffic throttling congestion [?] and sliding window flow control.

      10: But the question is asking about congestion control, not flow control.

      11: But isn't stop-and-wait an example of a flow-control mechanism?

    6. 1 Traffic throttling when too much traffic12 cut back the traffic.13 The mechanism is recognizing congestion.

      2 Long fast networks. The mechanism is hop-by-hop backpressure14 which is that each router forward a choke packet to devote extra buffer space15 to the subsequent flow. Then the congestion relief flows backwards to the sender.

      12: Is it possible to have too much traffic at the data-link layer? If so, what would it look like?

      13: What mechanism would cut back traffic at the data-link layer?

      14: How many hops are there at the data-link layer?

      15: Buffer space where?

    7. 1 If there is much load in the network that cannot be handled by existing routers,16 but packets cannot be lost and efficiency should not be decreased in that case use network provisioning congestion control mechanism.

      2 A low load handling network but at times it should handle much more load than its capacity17 in that case use admission control congestion control mechanism.

      If connectionless services is used then congestion control is not guaranteed by higher layers in that case congestion can be controlled in data-link layer.18

      16: Do routers exist at the data-link layer?

      17: Where would be the source of such overload?

      18: How can the data-link layer deal with higher-level congestion? What tools can it apply at what locations to deal with higher-level congestion?

    8. Congestion is meant to be a problem in the network.19 When many hosts use a single shared link that might cause congestion along with collision.

      Another example20 is in ALOHA protocol. In pure ALOHA protocol, congestion can occur since this protocol doesn't check whether the channel is busy or not before sending.21

      It's22 also not divided into time and that can cause congestion.

      Also, resending collapsed frame without waiting can cause congestion.

      19: True, but the question is asking about the data-link layer part of the network.

      20: Is this another example of a congestion source or a congestion relief mechanism?

      21: How is this example different from the previous example? They both seem to involve collisions.

      22: To what is “it” referring?

  5. Explain why byte stuffing needs an escape mechanism but bit stuffing doesn't.


    Byte framing values should be unique to the frame, but may appear in the payload. Because the byte is the smallest transmission unit, a payload framing value can't be broken up and must be preceded by another unique framing value, the escape value, indicating the following value should be ignored. Unique bit framing values can also appear in the payload. Because the bit is the smallest transmission unit, a well-placed bit of the appropriate kind is enough to invalidate the payload framing value.
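    The contrast can be illustrated with the usual HDLC-style values (flag 0x7E, escape 0x7D); this is a sketch of the two stuffing steps, not a full framer.

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload):
    """Bytes can't be split, so a payload FLAG (or ESC) must be
    preceded by a whole extra byte: the escape value."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def bit_stuff(bits):
    """Bits are the smallest unit, so one inserted 0 after any run of
    five 1s is enough to break up a payload flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # the stuffed bit; no escape value needed
            run = 0
    return out

print(byte_stuff(bytes([0x41, 0x7E, 0x42])).hex())   # 417d7e42
print(bit_stuff([0, 1, 1, 1, 1, 1, 1, 0]))           # [0, 1, 1, 1, 1, 1, 0, 1, 0]
```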

    The answers, approximately verbatim, and in no particular order:

    1. Byte stuffing needs an escape mechanism to neutralize payload flag-byte values but bit stuffing doesn't require escape mechanism because in bit stuffing we stuff/add a ‘0’ for every sequence of 5 1 bits.1 So, for 5 bits we are adding (1) bit, even if that ‘0’ is already present in bit string we can easily identify which one bit is stuffed using bit stuffing but whereas in byte stuffing we cannot identify which byte is stuffed (if sequence of some bytes appear in string), and which is already present in the bit string.

      1: What is it about bit stuffing that makes this approach possible (and not possible in byte stuffing)?

    2. An escape byte is needed for byte stuffing because the flag byte could occur in the payload, which causes confusion. The escape byte precedes the flag byte, neutralizing the value.

      In bit stuffing, the frame flag is typically 01111110₂. In the frame, a 0 is “stuffed” after every five consecutive 1s. Since the flag bit has six 1s, it can never be confused with frame data.
    3. Byte stuffing needs an escape mechanism to tell readers to ignore next flag byte, a frame-unique value.2 Successive flag bytes mark the start of a frame. Bit stuffing doesn't need an escape mechanism because it uses high-level data link control. Using HDLC every sequence of 5 1 bits is followed or stuffed with a 0 (this is invisible to outside the data link layer).3

      2: But why would readers need to be told that?

      3: What is it about HDLC that makes this bit manipulation possible?

    4. Byte stuffing needs an escape mechanism to avoid confusion that might happen because the flag byte occurs in the payload. So, it uses escape byte to neutralize payload flag byte values. As a result, it precedes a payload flag byte value with an escape byte.4

      4: And bit stuffing?

    5. Because the flag byte occurs in the data it interferes with the framing when the data is being transmitted. Then to solve this problem an ‘esc’ byte is added before flag byte.

      So by adding esc byte a framing flag byte can be distinguished from the one in the data.

      Bit stuffing doesn't need an esc byte because it automatically inserts a ‘0’ after every consecutive 1s.5

      5: But why can't byte stuffing do the same thing?

    6. With byte stuffing, the need for an escape mechanism is that in byte oriented protocols, information from the [?] can leak into the data frame.6

      Bit stuffing on the other hand, only [?] the frame as a collection of bits, so there is no need for an escape mechanism.7

      6: Isn't the information already in the data frame?

      7: Why? Is it because information can't leak into the data frame?

    7. Byte stuffing and bit stuffing are two methods.8 In bit stuffing we add a zero after 5 ones at the sender side. At the receiver's side, the data will be checked and bits that were stuffed (zeros) will be deleted. In bit stuffing, we don't need escape mechanism since data is either 0 or 1, and the sender's job is to insert zeros into the data. We use byte stuffing because in some cases the flag pattern can occur in any area.9 In the packet moreover it can occur in the data field so we need an escape, while that is not true in bit stuffing.

      8: True.

      9: But isn't that the same problem in bit stuffing? Why can't the same solution be used in byte stuffing?

    8. Byte stuffing needs an escape byte to neutralize payload flag byte values when escape byte is a frame unique byte value.10 Byte stuffing may double the frame size and when the payload isn't byte oriented bit stuffing is used when the frame flag is still a frame unique byte value has a bit pattern whence an escape mechanism isn't needed for the bit stuffing.

      10: Isn't the escape byte always a frame-unique value?

  6. Reverse path forwarding can be used to make sure broadcast packets don't collapse the network under exponential growth. Explain how.


    Reverse path forwarding keeps track of incoming links for packets with new addresses, assuming that link is the best link to use when delivering to the new address. A similar interpretation can be used for broadcast packets. The first link delivering a new broadcast packet can be considered the shortest route back to the sender. The same packet showing up on another link is considered a retransmission from some other node and can be discarded.
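    A minimal sketch of the forwarding decision; the link names and table layout are illustrative.

```python
def rpf_forward(source, arrival_link, shortest_link_to, links):
    """Forward a broadcast only if it arrived on the link we would use
    to reach its source; any other arrival is a duplicate and dropped."""
    if arrival_link != shortest_link_to[source]:
        return []                                    # discard: retransmission
    return [l for l in links if l != arrival_link]   # flood the rest

shortest_link_to = {"S": "east"}                     # our route back to S
links = ["east", "west", "north"]
print(rpf_forward("S", "east", shortest_link_to, links))  # ['west', 'north']
print(rpf_forward("S", "west", shortest_link_to, links))  # []
```

    Each node therefore re-floods a given broadcast at most once, so the total number of transmissions stays roughly one per link instead of multiplying exponentially.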

    The answers, approximately verbatim, and in no particular order:

    1. [ not answered ]
    2. Reverse path forwarding can be used to make sure broadcast packets can't collapse the network under exponential growth because if one router (x) finds ‘k’1 has best way to get to other router (y) then that second router (y) concludes that ‘k’ is the best way to get to x so the reverse way finding the route is not required.2

      1: What is k? Another router? Or maybe an out-link?

      2: But how does that quell exponential growth?

    3. Broadcast packets are sent to each neighboring node typically when received. Reverse path forwarding (RPF) only allows the sending node to forward the packet to all of its neighbors (except the one form which it just received the packet).3 If the packet arrived on the link that is on its4 own shortest path back to the source node it will not forward a packet otherwise, since it may have already received it or will in the future. Since packets are not blindly broadcast across the network each time they are received, flooding is less likely to occur, and the network is less likely to collapse under this exponential growth.

      3: But isn't that just broadcasting?

      4: To what does the “it” refer? The packet, or the sender, or to something else?

    4. In reverse path forwarding the path from the goal node is computed.5 In this way the broadcast packets don't collapse the network.6

      In the forward path, the broadcast pockets go around the network (which is exponential) until it reaches the destination. There is no destination it collapses the network.7

      But in reverse path forwarding it8 knows the goal node and thus it doesn't have to go exponentially like forward path.

      5: But in broadcasting every node but the sender is the goal node.

      6: What is it about the reverse-path forwarding computation that prevents exponential growth?

      7: How is it possible to have a broadcast with no destination?

      8: To what does “it” refer?

    5. RPF can be used to make sure broadcast packets don't collapse the network under exponential growth by transmitting the packet on all of its outgoing links (except that one it was received on). This will prevent the packet from going from router to another to another which would collapse the network because the cost will be growing exponentially.9 However if sent on all outgoing links its guaranteed to make it to the correct destination.10

      9: But blindly forwarding a packet on almost all outgoing links sounds like a recipe for exponential growth.

      10: Where is reverse-path forwarding in all this?

    6. With reverse-path forwarding, the way it is used to keep the broadcast packets from collapsing the network under exponential growth is when[?] in the TCP mechanism,11 it holds out on sending acks back to the sender to allow the receiver to make sure the network is being overused.12

      11: TCP? How does TCP get involved?

      12: Is that how reverse-path forwarding works?

    7. Reverse path forwarding can be used to make sure that the broadcast packets don't collapse the network under exponential growth because [?] send acknowledgments back13 and the path forwarding is done when the packet are completely sent14 to the [?] if they stop or collapse [?] are sent back and are again sent to the receiver. In this way reverse path forwarding can be used.

      13: How does reverse-path forwarding exploit acknowledgments? And why would acknowledgments be sent for a broadcast?

      14: What does it mean to be completely sent? And what is being completely sent? And how is it determined that something has been completely sent?

    8. In reverse path forwarding (RPF) a router transmits a packet on all other outgoing links except to where the packet received from only if the packet arrived on the link that's on the router's shortest unicast.15 Otherwise, the packet will be discarded or dropped without forwarding. That simple mechanism prevents exponential growth of the network. Another reason16 is that in RPF a router doesn't heed to a complete shortest path to the source.17 Neighbors need to be known only. The packet dropped because either it will be received by the router or it was already delivered.

      15: But how does the router know the shortest unicast path?

      16: Another reason for what?

      17: No router does, does it? It just needs the next hop in the shortest path.


This page last modified on 2012 November 25.