There are six questions; answer at least five of them. You have one hour, from 6 to 7.
Up to the first 75 words appearing after a question are taken to be the answer to the question. Words following the 75th word are not part of the answer and will be ignored.
To send decimal numbers with error detection (correction not needed), pick positive integer divisor G producing k-digit remainders, k = ⌈log10 G⌉. Take message N, multiply by 10^k to get N′. Divide N′ by G to get remainder r and subtract r from N′ to get N′′, a number evenly divisible by G. Send N′′. Divide received number m by G. A non-zero remainder means the message was corrupted during transmission.
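The procedure above can be sketched in a few lines of Python (a minimal illustration; the message and divisor values are made up):

```python
import math

def encode(n: int, g: int) -> int:
    # k-digit remainders: k = ceil(log10 G)
    k = math.ceil(math.log10(g))
    n_prime = n * 10 ** k          # N' = N shifted left by k digits
    r = n_prime % g                # remainder on division by G
    return n_prime - r             # N'' is evenly divisible by G

def check(m: int, g: int) -> bool:
    return m % g == 0              # non-zero remainder => corrupted

sent = encode(123456, 10941)
check(sent, 10941)        # True
check(sent + 7, 10941)    # False: corrupted in transit
```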
The answers, approximately verbatim, and in no particular order:
1: Ok, but what do the operations look like? What purpose do they achieve?
2: How many 0s? How is the number determined? What purpose do the 0s serve?
3: Which 1? There's a lot of them.
4: How does the sender do this? And after the sender does this, what happens then?
5: Mod-10 arithmetic? How does that work?
6: Why? What has gone wrong to produce this error?
1 If all single bit errors when the x^k8 and x^0 terms have coefficient 1.9
2 All double bit errors when C(x) contains a factor.
3 Any odd number of errors when C(x) contains the factor x + 1.
4 Any error burst of length less than k bits.
7: What is C(x)?
8: What is k?
9: What does this mean? How does it work?
10: How do the nines help? And how does the cyclic redundancy check work?
11: And then what happens?
Then we get a remainder, R. By subtract R from received frame[?] we get T. If G divides T, then there is no error. If G doesn't divide T, then there is error.12
Suppose received number is 210,278 and G is 10,941. By dividing 10,941 into 210,278, R → 2399. 210,278 - 2399 = 207,879 is divisible by 10,941, then no error.
12: Is the subtraction done at the sending end or the receiving end?
No. Adjacent senders collide immediately, and there's no appreciable attenuation with distance. Once a packet collides it's gone, and receivers, no matter where they're located, will be unable to receive it successfully.
The answers, approximately verbatim, and in no particular order:
The exposed host problem cannot occur on a wired bus LAN provided the following assumptions are true:
- No host within the LAN has access to any system or network outside the LAN.1
- All hosts within the network are trusted.2
- No system outside the LAN has access to any host on the LAN.3
These conditions create a closed circuit such that only the intended hosts have access to the data being transferred.4
1: Why would such access have an effect on anything?
2: Trusted to do what?
3: Isn't that a little severe?
4: Maybe, but how does that prevent the exposed host problem?
5: What is it about being point-to-point that prevents the exposed host problem?
6: This is tautological. If there is no congestion then data flows easily and vice versa. But why should there be no congestion?
7: What about all of this avoids the exposed host problem?
8: Where did a bridge come from?
9: Not even if the host and receiver are on the same side of the bridge?
10: What about all other hosts on the same segment?
On a wired bus LAN, that would not be a problem since the link will manage transmissions between senders11 and there is no outgoing signals.12
11: How does the link manage these transmissions?
12: Outgoing signals from where? To where?
13: But what about two hosts on the same LAN?
Distance-vector sends and receives optimal routes. A node receiving optimal routes immediately incorporates them (or not) in its routing table. Link-state sends and receives network topologies. A node receiving a network topology incorporates it in its graph, and runs a shortest-path algorithm. These steps take longer than distance-vector integration, and so link-state reacts more slowly to good news (improved routes) than does distance-vector. Link-state's quicker reaction to bad news is more valuable than distance-vector's quicker reaction to good news. The distance-vector advantage for good news is small, while the link-state advantage for bad news is large.
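The distance-vector update this comparison relies on, D_x(y) = min over neighbors v of { c(x, v) + D_v(y) }, can be sketched with a made-up three-node topology (node names and link costs here are illustrative, not from the exam):

```python
# Three nodes x, y, z; x--y costs 4, y--z costs 1, x--z costs 50.
INF = float("inf")
cost = {("x", "y"): 4, ("y", "x"): 4, ("y", "z"): 1,
        ("z", "y"): 1, ("x", "z"): 50, ("z", "x"): 50}
nodes = ["x", "y", "z"]
# D[a][b] = a's current estimate of its distance to b
D = {a: {b: (0 if a == b else INF) for b in nodes} for a in nodes}

def iterate() -> bool:
    """One round of exchanges: each node applies the update rule."""
    changed = False
    for a in nodes:
        for b in nodes:
            if a == b:
                continue
            best = min(cost[(a, v)] + D[v][b]
                       for v in nodes if (a, v) in cost)
            if best != D[a][b]:
                D[a][b] = best
                changed = True
    return changed

while iterate():       # repeat until no table changes
    pass
print(D["x"]["z"])     # 5: x reaches z via y (4 + 1), not directly (50)
```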
The answers, approximately verbatim, and in no particular order:
With the distance vector, each node talks to only its direct [?] neighbors, but it tells them everything it knows.
Based on this, link-state algorithms are the better of the two because it can get more information from different nodes, which can give it a more rounded information.1
1: Is that true? Or does link-state have adjacent neighbors communicate?
In link state algorithm oscillations occur because all the states end up to infinity2 eventually and the network keeps on oscillating.3
Distance-vector algorithm converges slowly and it also suffers from count to infinity problem. D_x(y, z) = c(x, z) + min_w { D_z(y, w) }.4 Distance table is maintained at each node, it5 is iterative, asynchronous and distributed.
The bad news travels slowly in distance vector algorithm because it updates tables one by one and takes a while.[ diagram ] In fig. 1 if the cost of xy is changed from 4 to 1, then it takes only two iterations6 to update the tables. But if the cost increases it takes long time for it to recognize. So good news spreads fast but bad news spreads slow in distance-vector routine.
2: Why would all link states go to infinity? Why not just the states associated with bad links?
3: What would be oscillating?
4: What does this equation mean?
5: To what does “it” refer? The table? If so, how could the table be iterative, asynchronous and distributed?
6: Two iterations of what, the link-state or distance-vector algorithm?
Link state reacting more quickly to bad news is advantageous because they compute shortest path algorithms based on graph. Therefore when it receives the news that there are multiple paths to get somewhere it will find the shortest path turning the bad news that there are many paths into an advantage for itself because it can calculate the shortest path.
Distance vector reacting more quickly to good news is advantageous because when router receives updated lists it will see this as good because distributing it will allow other routers to be able to form paths to each other.Also distance vector suffers from another bad problem which is count to infinity and that happens when a path is down and only direct neighbors know that while other nodes keep sending until it becomes like a loop.
Good news like low cost is known quickly in distance vector since each node knows its neighbors, so when the cost is dropped that news delivers quickly between neighbors.8
7: This is why link-state is faster than distance-vector, but the question asked about how distance-vector might be faster than link-state.
8: But is it more quickly than is done by link-state? Why?
Link-state routing algorithms react more quickly to bad news because of:
It9 has a time to live in case of bad news happened like (corruption or crashes).10
Distance vector routing algorithms react more quickly to good news because of:
It makes routers act awarely by the path aware because it's learning the network graph.11
Which important:
I think link state routing algorithms are important than distance vector routing algorithms because it can deal with bad news or problems.12 Also, dealing with errors is important than improving performance like distance vector algorithms do.
9: To what does “it” refer?
10: How does a time-to-live help link-state react more quickly? And is this the reason link-state reacts more quickly to bad news?
11: Do you mean it directly learns the network graph, or it indirectly learns optimal routes?
12: But so does distance-vector, doesn't it?
(LS) = Link-state \ (DV) = Distance-vector
LS routing algorithms take a long time to set up,13 but once they are initialized, they store full maps of networks for easy routing + rerouting. Problem: if the system crashes, it takes more time/effort to set-up again.14 If there is a problem, however, LS routing algorithms can reroute much faster with its omniscience.15
DV routing algorithms are iterative.16 Nodes store information from their neighbors about path costs and update when problems occur. If there are no problems (i.e., good news), they react quicker.
Which is better is circumstantial. In cases like WiFi (error-prone), LS is better for damage control. For more stable (e.g., wired) networks, DV is better for throughput. Neither scales with size.
13: “Long time” means what?
14: More time than distance-vector? Why is that?
15: Surely an omniscient algorithm would reroute much faster, but is link-state omniscient?
16: But isn't link-state iterative too?
17: And the relative value of the advantages is...?
Unfair link scheduling can cause starvation for unfavored hosts, which acts as a form of congestion. Using fair link scheduling removes this congestion. A more traditional form of congestion can occur at each node when frames arrive at a greater rate than they can be sent. In this case excess frames can be discarded, or back-pressure can be sent up the protocol stack to reduce the arrival rate.
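The per-node behavior described above can be sketched briefly; the capacity and high-water values below are made-up illustrations (tail drop for excess frames, a back-pressure signal near the threshold):

```python
from collections import deque

class LinkQueue:
    """Outbound frame buffer at one node (illustrative sketch)."""
    def __init__(self, capacity: int, high_water: int):
        self.q = deque()
        self.capacity = capacity      # max frames buffered
        self.high_water = high_water  # back-pressure threshold
        self.dropped = 0

    def arrive(self, frame) -> str:
        if len(self.q) >= self.capacity:
            self.dropped += 1         # excess frame discarded
            return "dropped"
        self.q.append(frame)
        if len(self.q) >= self.high_water:
            return "backpressure"     # tell the stack to slow arrivals
        return "queued"

    def send_one(self):
        return self.q.popleft() if self.q else None

lq = LinkQueue(capacity=4, high_water=3)
results = [lq.arrive(f) for f in range(6)]
# frames 0-1 queued; 2-3 hit the high-water mark; 4-5 overflow and drop
```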
The answers, approximately verbatim, and in no particular order:
1 feedback-based flow control and
2 Rate-based flow control2
Feedback based flow control protocols are
1 Stop and wait protocol: sender waits for acks.
2 Sliding window protocol: The ack is being sent with the next frame which is known as piggybacking.
3 Go-back-n.
4 Selective repeat is used for congestion control.
1: Is it possible for congestion to occur at the data-link layer? If so, what would it look like?
2: Are flow control and congestion control the same thing?
Frames could collide, meaning that a node receives multiple frames at the same time. Random access protocols (such as slotted ALOHA) can be employed to force colliding packets to each wait a random amount of time before retransmitting.3
Multiple nodes could flood the network by sending frames at the same time.4 reservation protocols + “taking-turns protocols” can be employed to help alleviate the stress this puts on the network by forcing nodes to take turns sending frames.
3: Isn't this more like random back-off?
4: Isn't this the same problem as in the previous paragraph?
5: How do framing and bit errors have to do with resource exhaustion?
6: They are certainly part of a larger framing scheme, but how do stuffing mechanisms deal with framing and bit errors?
Congestion: The router7 can handle M bits/sec maximum so congestion occurs when Σ_j i_j bits/sec > M or when Σ_j i_j,4 bits/sec > M_4.8
To control congestion you could only admit traffic when the network can handle it without congestion or use multi-path routing. Or random early detection9 drop packets at random in near congestion conditions.
7: Is there a router at the data-link layer?
8: What does this mean for the data-link layer?
9: Why is this an appropriate mechanism to use at the data-link layer?
10: But the question is asking about congestion control, not flow control.
11: But isn't stop-and-wait an example of a flow-control mechanism?
2 Long fast networks. The mechanism is hop-by-hop backpressure14 which is that each router forward a choke packet to devote extra buffer space15 to the subsequent flow. Then the congestion relief flows backwards to the sender.
12: Is it possible to have too much traffic at the data-link layer? If so, what would it look like?
13: What mechanism would cut back traffic at the data-link layer?
14: How many hops are there at the data-link layer?
15: Buffer space where?
2 A low load handling network but at times it should handle much more load than its capacity17 in that case use admission control congestion control mechanism.
If connectionless services is used then congestion control is not guaranteed by higher layers in that case congestion can be controlled in data-link layer.18
16: Do routers exist at the data-link layer?
17: Where would be the source of such overload?
18: How can the data-link layer deal with higher-level congestion? What tools can it apply at what locations to deal with higher-level congestion?
Another example20 is in ALOHA protocol. In pure ALOHA protocol, congestion can occur since this protocol doesn't check whether the channel is busy or not before sending.21
It's22 also not divided into time and that can cause congestion. Also, resending collapsed frame without waiting can cause congestion.
19: True, but the question is asking about the data-link layer part of the network.
20: Is this another example of a congestion source or a congestion relief mechanism?
21: How is this example different from the previous example? They both seem to involve collisions.
22: To what is “it” referring?
Byte framing values should be unique to the frame, but may appear in the payload. Because the byte is the smallest transmission unit, a payload framing value can't be broken up and must be preceded by another unique framing value, the escape value, indicating the following value should be ignored. Unique bit framing values can also appear in the payload. Because the bit is the smallest transmission unit, a well-placed bit of the appropriate kind is enough to invalidate the payload framing value.
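Both schemes can be sketched in a few lines; the FLAG and ESC byte values below are illustrative choices (the classic HDLC-style ones), not values the text mandates:

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    # Precede any in-payload FLAG or ESC with the escape value,
    # telling the receiver to take the following byte literally.
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def bit_stuff(bits: str) -> str:
    # After five consecutive 1s, insert a 0 so the payload can never
    # contain the six-1s flag pattern 01111110.
    out, run = [], 0
    for bit in bits:
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

byte_stuff(bytes([0x01, 0x7E, 0x02]))   # -> 01 7D 7E 02
bit_stuff("0111111")                    # -> "01111101"
```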
The answers, approximately verbatim, and in no particular order:
1: What is it about bit stuffing that makes this approach possible (and not possible in byte stuffing)?
An escape byte is needed for byte stuffing because the flag byte could occur in the payload, which causes confusion. The escape byte precedes the flag byte, neutralizing the value.
In bit stuffing, the frame flag is typically 01111110.2 In the frame, a 0 is “stuffed” after every five consecutive 1s. Since the flag bit has six 1s, it can never be confused with frame data.
2: But why would readers need to be told that?
3: What is it about HDLC that makes this bit manipulation possible?
4: And bit stuffing?
So by adding esc byte a framing flag byte can be distinguished from the one in the data.
Bit stuffing doesn't need an esc byte because it automatically inserts a ‘0’ after every consecutive 1s.5
5: But why can't byte stuffing do the same thing?
With byte stuffing, the need for an escape mechanism is that in byte oriented protocols, information from the [?] can leak into the data frame.6
Bit stuffing on the other hand, only [?] the frame as a collection of bits, so there is no need for an escape mechanism.7
6: Isn't the information already in the data frame?
7: Why? Is it because information can't leak into the data frame?
8: True.
9: But isn't that the same problem in bit stuffing? Why can't the same solution be used in byte stuffing?
10: Isn't the escape byte always a frame-unique value?
Reverse path forwarding keeps track of incoming links for packets with new addresses, assuming that link is the best link to use when delivering to the new address. A similar interpretation can be used for broadcast packets. The first link delivering a new broadcast packet can be considered the shortest route back to the sender. The same packet showing on another link is considered a retransmission from some other node and can be discarded.
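The first-link heuristic described above can be sketched as follows (a minimal illustration; the link and source names are made up):

```python
# Accept (and flood) a broadcast packet only if it arrives on the link
# assumed to be the shortest route back to its source: the link on
# which that source's traffic was first seen.
best_link_to = {}   # source address -> first (assumed shortest) in-link

def rpf_accept(src: str, in_link: str) -> bool:
    if src not in best_link_to:
        best_link_to[src] = in_link      # first arrival defines the path
        return True
    # A copy arriving on any other link is taken to be a retransmission
    # from some other node, and is discarded.
    return best_link_to[src] == in_link

rpf_accept("A", "link1")   # True: first copy from A
rpf_accept("A", "link2")   # False: duplicate on another link, dropped
rpf_accept("A", "link1")   # True: still the accepted reverse path
```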
The answers, approximately verbatim, and in no particular order:
1: What is k? Another router? Or maybe an out-link?
2: But how does that quell exponential growth?
3: But isn't that just broadcasting?
4: To what does the “it” refer? The packet, or the sender, or to something else?
In the forward path, the broadcast pockets go around the network (which is exponential) until it reaches the destination. There is no destination it collapses the network.7
But in reverse path forwarding it8 knows the goal node and thus it doesn't have to go exponentially like forward path.
5: But in broadcasting every node but the sender is the goal node.
6: What is it about the reverse-path forwarding computation prevents exponential growth?
7: How is it possible to have a broadcast with no destination?
8: To what does “it” refer?
9: But blindly forwarding a packet on almost all outgoing links sounds like a recipe for exponential growth.
10: Where is reverse-path forwarding in all this?
11: TCP? How does TCP get involved?
12: Is that how reverse-path forwarding works?
13: How does reverse-path forwarding exploit acknowledgments? And why would acknowledgments be sent for a broadcast?
14: What does it mean to be completely sent? And what is being completely sent? And how is it determined that something has been completely sent?
15: But how does the router know the shortest unicast path?
16: Another reason for what?
17: No router does, does it? It just needs the next hop in the shortest path.