Comp 443 Computer Networks Final Exam Study Exercises - Solutions

Numbering: just one number if it's the same for the 3rd and 4th editions; otherwise 3rd/4th.

Chapter 4:

8. The Ident field is 16 bits, so we can send 576 * 2^16 bytes per 60 seconds, or about 5 Mbps. If we send faster than this, then fragments of one packet could conceivably have the same Ident value as fragments of a different packet.

14(a). If multiple packets after the first arrive at the IP layer for outbound delivery, but before the first ARP response comes back, then we send out multiple unnecessary ARP packets.
(b). We should maintain a list of currently outstanding ARP queries. Before sending a query, we first check this list.

17. Forward search from D (a code sketch of the algorithm appears after exercise 45 below):

Step  Confirmed                           Tentative
1.    (D,0,-)
2.    (D,0,-)                             (A,8,A) (E,2,E)
3.    (D,0,-) (E,2,E)                     (A,8,A) (B,4,E) (C,3,E)
4.    (D,0,-) (E,2,E) (C,3,E)             (A,6,E) (B,4,E) (F,9,E)
5.    (D,0,-) (E,2,E) (C,3,E) (B,4,E)     (A,6,E) (F,9,E)
6.    previous + (A,6,E)                  (F,9,E)
7.    previous + (F,9,E)

30(a). This could happen if the link changed state recently, and one of the two LSPs was old.
(b). If flooding is working properly, and if A and B do in fact agree on the state of the link, then eventually (rather quickly) whichever of the two LSPs was old would be replaced by the same sender's newer version, and the reports from the two sides of C would again agree.

31(a). Q will receive three routes to P, along links 1, 2, and 3.
(b). A-->B traffic will take link 1. B-->A traffic will take link 2. Note that this strategy minimizes the cost to the source of the traffic.
(c). To have B-->A traffic take link 1, Q could simply be configured to prefer link 1 in all cases. The only general solution, though, is for Q to accept into its routing tables some of the internal structure of P, so that Q, for example, knows where A is relative to links 1 and 2.

33(a). The diameter D of a network organized as a binary tree, with the root node as "backbone", would be of order log2(A). The diameter of a planar rectangular grid of connections would be of order sqrt(A).
(b). For each AS S, the BGP node needs to maintain a record of the AS_PATH to S, requiring 2*actual_path_length bytes. It also needs a list of all the networks within S, requiring 4*number_of_networks bytes. Summing these over all autonomous systems, we get 2AD + 4N, which is 2AC*log2(A) + 4N for the tree model and 2AC*sqrt(A) + 4N for the grid model from part (a), where C is a constant.

34(b). Ethernet bridges learn host locations from the *source* field of the packets they forward toward the *destination*. The main difference is that bridges are *able* to process packets with an unknown destination, by falling back to "broadcast mode". IP routers do not have that option; they must have their forwarding tables complete in advance of being able to handle traffic.

39. A can reach B and D but not C. A "thinks" that B and C are local, and attempts to send directly to their Ethernet addresses. The packet sent to B's Ethernet address is bridged by RB, and so is delivered, but the packet sent to C's Ethernet address is NOT forwarded by R1. R1 doesn't really even see the packet; routers forward only packets addressed at the link level directly to them. A knows that D is nonlocal, and so sends to R2's Ethernet address. Unlike the packet to C, though, this packet is forwarded by RB, and so it reaches R2. Actually, A would not even *have* the Ethernet addresses for B, C, and R2 initially, so it would attempt to use ARP. This would succeed for B and R2, but not for C; R1 would certainly not forward ARP packets.

45. (a): B  (b): A  (c): E  (d): F  (e): C  (f): D
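As a companion to exercise 17, here is a minimal Python sketch of the link-state forward search (Dijkstra) procedure. The edge costs in the example graph are assumed purely for illustration -- they are chosen to be consistent with the table above, not necessarily identical to the textbook figure.

def forward_search(graph, source):
    # Link-state forward search (Dijkstra).
    # graph: {node: {neighbor: link_cost}}
    # Returns {node: (cost, next_hop)} -- the Confirmed list.
    confirmed = {source: (0, None)}
    tentative = {}
    current = source
    while True:
        # Relax the neighbors of the node just confirmed.
        base_cost, base_hop = confirmed[current]
        for nbr, link_cost in graph[current].items():
            if nbr in confirmed:
                continue
            new_cost = base_cost + link_cost
            next_hop = nbr if current == source else base_hop
            if nbr not in tentative or new_cost < tentative[nbr][0]:
                tentative[nbr] = (new_cost, next_hop)
        if not tentative:
            break
        # Move the lowest-cost Tentative entry to Confirmed.
        current = min(tentative, key=lambda n: tentative[n][0])
        confirmed[current] = tentative.pop(current)
    return confirmed

# Edge costs assumed for illustration; they reproduce the steps in the table above.
graph = {
    'D': {'A': 8, 'E': 2},
    'E': {'D': 2, 'B': 2, 'C': 1},
    'C': {'E': 1, 'A': 3, 'F': 6},
    'B': {'E': 2},
    'A': {'D': 8, 'C': 3},
    'F': {'C': 6},
}
print(forward_search(graph, 'D'))
# -> {'D': (0, None), 'E': (2, 'E'), 'C': (3, 'E'), 'B': (4, 'E'), 'A': (6, 'E'), 'F': (9, 'E')}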
47(a).
P's table:
  address          nexthop
  C2.0.0.0/8       Q
  C3.0.0.0/8       R
  C1.A3.0.0/16     PA
  C1.B0.0.0/12     PB

Q's table:
  address          nexthop
  C1.0.0.0/8       P
  C3.0.0.0/8       R
  C2.0A.10.0/20    QA
  C2.0B.0.0/16     QB

R's table:
  address          nexthop
  C1.0.0.0/8       P
  C2.0.0.0/8       Q

(b): The same, except for the following changes of one entry each to P's and R's tables:
  P: C3.0.0.0/8   Q   // was for R
  R: C1.0.0.0/8   Q   // was for P

48. The longest-match rule is intended for this. Note that *all* providers now have to include entries for PA and QB, though.

54(b). If N has its own IP network number, then R1 does announce its route to N to the world. R1 would not necessarily announce its route to A, however. R2 would not change: it would still announce N into A. Assuming A does not announce its route to N into its provider, all external traffic to and from N now goes through the new link, and N-A traffic goes through R2.

==========================================================
Chapter 5:

5. The two-segment-lifetime timeout results from the need to purge old late duplicates, and from the uncertainty of the sender of the last ACK as to whether it was received. For the first issue we only need one connection endpoint in TIMEWAIT; for the second issue, a host in the LAST_ACK state expects to receive the last ACK, rather than send it.

20(a). (A small simulation of this trace appears after exercise 36 below.)
T=0.0  'a' sent
T=1.0  'b' collected in buffer
T=2.0  'c' collected in buffer
T=3.0  'd' collected in buffer
T=4.0  'e' collected in buffer
T=4.1  ACK of 'a' arrives; "bcde" sent
T=5.0  'f' collected in buffer
T=6.0  'g' collected in buffer
T=7.0  'h' collected in buffer
T=8.0  'i' collected in buffer
T=8.1  ACK arrives; "fghi" sent
(b) The user would type ahead blindly at times. Characters would be echoed between 4 and 8 seconds late, and the echoing would come in chunks of four or so. Such behavior is quite common over telnet connections, even those with much more modest RTTs, but the extent to which it is due to the Nagle algorithm is unclear.
(c) With the Nagle algorithm, the mouse cursor would appear to skip from one spot to another. Without the Nagle algorithm the cursor would move smoothly, but it would display some inertia: it would keep moving for one RTT after the physical mouse was stopped.

22. Whichever endpoint remains in TIMEWAIT must retain a record of the connection for the duration of TIMEWAIT; as the server is typically involved in many more connections than clients are, the server's recordkeeping requirements would be much more onerous.

33(a). The first incarnation must have closed successfully and the second must have opened; this implies the exchange of FIN and SYN packets and the associated ACKs. The delayed data must also have been successfully retransmitted.
(b) The most plausible scenario involves the use of two parallel links, one of which developed massive congestion, causing subsequent traffic to be routed via the other link.

36(a). One would now need some sort of connection number assigned by the client side to play the role of the port number in demultiplexing traffic; with this in place, the headers might not change much at all. Client and server sockets would now be fundamentally different objects: server sockets would be required to bind() themselves to a port number (perhaps at creation time), while clients would be forbidden to do so.
(b). We still need to make sure that a client connection number is not reused within the 2xMSL period, at least not with the same server port. However, this is now a TCP-layer responsibility, not an application concern.
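As promised under exercise 20(a), here is a minimal sketch of how the Nagle algorithm produces the trace above. The assumptions are those of the exercise as restated above: one keystroke per second starting at T=0, a fixed 4.1 s RTT, and segments that never reach a full MSS. This simple model charges a full RTT from the moment each segment is sent, so its final ACK lands at T=8.2 rather than the 8.1 shown in the table.

def nagle_trace(rtt=4.1, keys="abcdefghi", key_interval=1.0):
    # Nagle rule for small segments: send a new segment only when no data is
    # outstanding; otherwise accumulate keystrokes in the buffer.
    events = []
    buffer = ""
    ack_due = None                      # arrival time of the ACK for the outstanding segment
    t_key = 0.0
    for ch in keys:
        # Process an ACK that arrives before this keystroke.
        if ack_due is not None and ack_due <= t_key:
            if buffer:
                events.append((ack_due, 'ACK arrives; "%s" sent' % buffer))
                ack_due += rtt          # the newly sent segment is now outstanding
                buffer = ""
            else:
                events.append((ack_due, "ACK arrives; nothing to send"))
                ack_due = None
        if ack_due is None:             # nothing outstanding: send the keystroke at once
            events.append((t_key, "'%s' sent" % ch))
            ack_due = t_key + rtt
        else:                           # data outstanding: just buffer the keystroke
            events.append((t_key, "'%s' collected in buffer" % ch))
            buffer += ch
        t_key += key_interval
    while ack_due is not None:          # drain the trailing ACKs
        if buffer:
            events.append((ack_due, 'ACK arrives; "%s" sent' % buffer))
            ack_due, buffer = ack_due + rtt, ""
        else:
            events.append((ack_due, "final ACK arrives"))
            ack_due = None
    for t, what in events:
        print("T=%.1f  %s" % (t, what))

nagle_trace()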
39. Incrementing the Ack number for a FIN is essential, so that the sender of the FIN can determine that the FIN itself was received and not just the preceding data. For a SYN, any ACK of subsequent data would increment the acknowledgement number, and any such ACK would implicitly acknowledge the SYN as well (data cannot be ACKed until the connection is established). Thus, incrementing the sequence number for a SYN is a matter of convention and consistency rather than of design necessity.

41. If a crash occurs in the middle of a connection, it leaves a "half-open" connection in which one endpoint has lost all state regarding the connection. Because TCP (unlike BLAST) keeps such connection state, the broken connection will be discovered as soon as the other side sends anything; the crashed side will respond with RST. Another possibility is for one host, say A, to send a SYN, then crash, and then send another SYN for a new connection. Normally this is detected because the second SYN will have a different ISN, but even if it is not, the problem is not terribly serious.

==========================================================
Chapter 6:

5. Yes, particularly if the immediate first link is high-bandwidth, the first router has a large buffer capacity, and the delay in the connection is downstream. CongestionWindow can grow arbitrarily; the excess packets will simply pile up at the first router.

13(a). With round-robin service, we will alternate one telnet packet with each ftp packet, causing telnet to have dismal throughput.
(b) With FQ, we send roughly equal volumes of data for each flow. There are about 552/41 ~ 13.5 telnet packets per ftp packet's worth of data, so we now send 13.5 telnet packets per ftp packet. This is better.
(c) We now send 512 telnet packets per ftp packet. This excessively penalizes ftp. Note that with the standard Nagle algorithm a backed-up telnet would not in fact send each character in its own packet.

16(a). In slow start, the size of the window doubles every RTT. At the end of the ith RTT, the window size is 2^i KB. It will take 10 RTTs before the send window has reached 2^10 KB = 1 MB.
(b) After 10 RTTs, 1023 KB = 1 MB - 1 KB has been transferred, and the window size is now 1 MB. If the RWS is 1 MB, we do *not* continue to increase the actual window size, and so the transfer takes another 9 RTTs, one for each of the remaining 9 MB of the file (technically we're 1 KB short, so there may be a 10th RTT for that last 1 KB). If the RWS is 10 MB, then cwnd never stops increasing. At the end of the 13th RTT the window size is 8 MB, and a total of almost 8 MB has been transferred. During the next RTT, though, there are only about 2 MB of the file left, and so that is all that is sent (not a full window of 8 MB). The last four RTTs transfer roughly 1, 2, 4, and 2 MB respectively.
(c). In the RWS = 10 MB case, it takes 1.4 seconds (14 RTTs) to send the file. The effective throughput is (10 MB / 1.4 s) = 7.1 MBps = 57.1 Mbps. This is only 5.7% of the available link bandwidth.
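For exercise 16(b)/(c), a minimal sketch under the assumptions stated above: cwnd starts at 1 KB, doubles each RTT, is capped by the receiver window, one windowful is sent per RTT, and the link itself is never the limit.

def slow_start_rtts(file_kb, rwnd_kb, cwnd_kb=1):
    # Pure slow start with no losses: cwnd doubles every RTT, capped at rwnd.
    sent, rtts = 0, 0
    while sent < file_kb:
        sent += min(cwnd_kb, file_kb - sent)
        rtts += 1
        cwnd_kb = min(cwnd_kb * 2, rwnd_kb)
    return rtts

print(slow_start_rtts(10 * 1024, rwnd_kb=1024))       # RWS = 1 MB  -> 20 RTTs
                                                      # (includes the extra RTT for the final 1 KB)
print(slow_start_rtts(10 * 1024, rwnd_kb=10 * 1024))  # RWS = 10 MB -> 14 RTTs, i.e. 1.4 s at RTT = 0.1 s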
17. Let the sender window size be 1 packet initially. The sender sends an entire windowful in one batch; for every ACK of such a windowful that the sender receives, it increases its effective window (which is counted in packets) by one. When there is a timeout, the effective window is cut to half the number of packets. Now consider the situation when the indicated packets are lost. The window size is initially 1; when we get the first ACK it increases to 2. At the beginning of the second RTT we send packets 2 and 3. When we get their ACKs we increase the window size to 3 and send packets 4, 5, and 6. When these ACKs arrive the window size becomes 4. Now, at the beginning of the fourth RTT, we send packets 7, 8, 9, and 10; by hypothesis packet 9 is lost. So, at the end of the fourth RTT we have a timeout and the window size is reduced to 4/2 = 2.

21/22(a). Here is the table of events with TimeOut = 2 sec. I made a mistake here; the table has timeouts handled *before* ACKs; that is, at T=4, when Ack3 arrives and Data4 times out, I time out and resend Data4 (and set cwnd = 1). If I followed the instructions, I would process the ACK and send Data7 and Data8, and *then* handle the timeout. In that case, Data7 would go into the queue, but Data8 and the resent Data4 would be lost. There is no idle time on the R--B link.

Time  A recvs           A sends                   R sends   cwnd size
0                       Data0                     Data0     1
1     Ack0              Data1,2                   Data1     2
2     Ack1              Data3,4 (4 dropped)       Data2     3
3     Ack2              Data5,6 (6 dropped)       Data3     4
4     Ack3/timeout4     Data4                     Data5     1
5     Ack3/timeout5&6   nothing                   Data4     1
6     Ack5              Data6                     Data6     1
7     Ack6              Data7,8 (slow start)      Data7     2

23/24. The router is able in principle to determine the actual number of bytes outstanding in the connection at any time, by examining sequence and acknowledgement numbers. This we can take to be the congestion window, except immediately after the congestion window decreases. Slow start after a coarse-grained timeout is trickier. The main problem is that the router has no way to know when such a timeout occurs; the TCP might have inferred a lost packet by some other means. After any packet is retransmitted, however, we should see the congestion window fall at least in half. This amounts to verifying multiplicative decrease, though, not slow start.

26/27. If the receiver sends ACKs early (before the data actually arrives), it causes the sender to open cwnd faster, thus creating increased throughput. One possible way to catch such a receiver would be for the sender to check, for every packet, whether it was ACKed before it was sent. Unless the receiver is very careful about timing measurements, it is likely that the receiver would sooner or later send an ACK before the corresponding packet was even sent, and thus be caught out.

27/28. The new feature is fast retransmit; you can tell because at about T=5.5 there is not a full timeout. Fast recovery, though, is still not shown.

30/31. Suppose the first two connections keep the queue full 95% of the time, alternating transmissions in lockstep and timed so that their packets always arrive just as a queue vacancy opens. Suppose also that the third connection's packets happen always to arrive when the queue is full. The third connection's packets will thus be lost, whether we use slow start or not. The first two connections will not be affected.

33/34. Marking a packet allows the endpoints to adjust to congestion more efficiently -- they may be able to avoid losses (and timeouts) altogether by slowing their sending rates. However, transport protocols must be modified to understand and account for the congestion bit. Dropping packets leads to timeouts, and therefore may be less efficient, but current protocols (such as TCP) need not be modified to use RED. Also, dropping is a way to rein in an ill-behaved sender.

42/43(a). If we send 1 packet, then in either case we see a 1 sec RTT. If we send a burst of 10 packets, though, then in the first case ACKs are sent back at 1 sec intervals, and the last packet has a measured RTT of 10 sec. The second case gives a 1 sec RTT for the first packet and a 2 sec RTT for the last. (A small sketch of the first case follows.)
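For 42/43(a), a minimal sketch of the first case. The parameters -- a single FIFO bottleneck that serves one packet per second, an instantaneous burst arrival, and negligible ACK return delay -- are assumptions inferred from the numbers quoted above.

def burst_rtts(n_packets, service_time=1.0):
    # Each packet in a back-to-back burst waits behind the packets ahead of it
    # in the FIFO bottleneck, so packet i measures an RTT of i * service_time.
    rtts, finish = [], 0.0
    for _ in range(n_packets):
        finish += service_time
        rtts.append(finish)
    return rtts

print(burst_rtts(1))   # [1.0]                  -- a lone packet measures a 1 sec RTT
print(burst_rtts(10))  # [1.0, 2.0, ..., 10.0]  -- the 10th packet measures 10 sec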