Advanced TCP/IP networks midterm study guide solutions

> Chapter 6 exercises that are worth at least reading:
> 5, 6, 11, 15, 24, 25, 27, 28, 39a

> 1. Explain what is meant by a "phase effect" in TCP, and explain why both
> adding a small random delay to packet transmission time ("overhead") and
> using random-drop queues can reduce phase effects. Which of these
> techniques reduces phase effects more, and why?

Phase effects are effects that depend on peculiarities of exact timing. For example, a router with a full queue creates a "vacancy" in the queue each time it finishes transmitting a packet. If new packet arrivals from one flow are synchronized to arrive just after each new vacancy opens up, then some other flow may be starved completely.

Adding a random overhead time gives competing flows a chance to be "first" in getting a packet into the new vacancy. Random-drop queues mean that when a packet arrives and the queue is full, the drop victim is chosen at random from among the new arrival and all existing queue entries, so it is likely that some packet other than the new arrival will be dropped. Getting there first no longer matters at all.

> 2. Give a timeline showing two packet drops in one TCP window, for both Reno
> and NewReno implementations. Identify periods of congestion window "inflation"
> (the upper end moves up but the lower end does not) and "deflation" (the lower end
> moves up but the upper end does not).

In class, we focused more on descriptions of Reno and NewReno in terms of "flightsize" than cwnd. That said, in Reno the second drop would result in another window reduction, while in NewReno we would end up resending the second dropped packet without further reducing the effective window size.

> 3. Explain why a longer RTT alone, with the same packet-loss risk,
> can mean that a TCP connection gets less bandwidth.

TCP windows are increased by one packet every RTT. With a larger RTT, the rate of window increase is thus slowed.

> 4.
> Discuss how to arrange for geographical routing in IP.
> Discuss how, using BGP or some other method, routers in adjacent towns on
> opposite sides of a state line could send traffic to each other directly,
> while each sends other traffic to their state's respective state hub.

Geographical routing can be implemented by assigning each local network to a region, and having each local network forward all its nonlocal traffic to its regional hub. More levels of regional aggregation can be provided if desired.

This means that if two hosts are near each other but are in different regions, then packets from one to the other will go the long way around: A--R1--R2--B rather than A--B. (The vertical bar below is the state line.)

    R1---|----R2
      \  |   /
       \ |  /
        A|-B

To fix this, one approach would be to have regional border routers advertise the local networks near each exit point. Thus, A would not just have a default route pointing to R1; it would also have an explicit entry for B. This would allow traffic from A to B to take the direct route. Traffic from B to A would take the direct route if B placed A into its routing tables.

Note that this has nothing to do with the BGP multi_exit_disc values; those would apply if the right side above wanted traffic from R2 to A to go via B.

> 5. How might a TCP figure out the "path bandwidth", that is, the smallest bandwidth
> along the path to the destination?
> What use could a TCP make of this information?

The usual technique for this is packet pairs: the sender periodically sends a pair of packets, one immediately following the other. In many cases the router on the bottleneck link won't transmit these consecutively, since some other packet may have arrived in between them. However, in some cases the packets *will* be sent consecutively, and then the difference in their arrival times is determined just by the bandwidth of the bottleneck link: the gap is the time the bottleneck link takes to transmit one packet. A TCP could use this information, for example, to pace its transmissions so as not to send faster than the bottleneck rate.

> 6. (a) How does TCP use IP addresses?
> (b) Explain the "Network Address Translation" (NAT) approach to IP address assignment.
> Explain the advantages to an organization, in terms of supporting change of provider.
> Explain the changes that would have to be made to TCP.

TCP uses IP addresses to keep track of the other end of each connection. A given port on a given host can be simultaneously connected to any of several remote <host,port> pairs. If a packet arrives from <rhost,rport> (r = remote) and is addressed to the local <lhost,lport>, then <rhost,rport> is compared to each remote endpoint that is connected to <lhost,lport>. The data is delivered to the connection that matches.

The NAT approach is for the routers to fill in the network portion of the IP address, so hosts only "know" their local host number (and possibly subnet number). This means that to change providers, an organization has only to reconfigure its NAT box to write in the new "locator" information.

In terms of TCP, this means that the other end of any TCP connection from such a network records the NAT-generated host address, which is not known to the originating host itself! This is not a problem _per se_, but it becomes one if the NAT box changes the outgoing IP addresses on the fly. Packets may still be routed correctly, but the other end's TCP will see them as unrelated to the original connection.

Proposed fixes to allow on-the-fly IP address changes with TCP include:

* using only the low-order 8 bytes of the address for TCP (IPv6 only)
* having the TCP layer learn, through explicit notification, of a remote endpoint address change
* putting the original remote-endpoint address into a separate field, and not changing it as the actual address changes.
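The demultiplexing rule in 6(a) can be sketched as a simple table lookup. This is an illustrative sketch, not an actual stack's API; the names (`conn_table`, `register`, `demux`) are invented for the example:

```python
# Sketch of TCP demultiplexing by connection 4-tuple (illustrative).
# A connection is identified by (lhost, lport, rhost, rport); an
# arriving segment is matched on the remote endpoint as well as the
# local one, so one local port can serve many remote endpoints.

conn_table = {}   # (lhost, lport, rhost, rport) -> connection state

def register(lhost, lport, rhost, rport, state):
    conn_table[(lhost, lport, rhost, rport)] = state

def demux(dst_host, dst_port, src_host, src_port):
    """Return the connection matching an arriving segment, or None."""
    return conn_table.get((dst_host, dst_port, src_host, src_port))

# One local port (80) simultaneously connected to two remote endpoints:
register("10.0.0.1", 80, "10.0.0.7", 4001, "conn A")
register("10.0.0.1", 80, "10.0.0.9", 4001, "conn B")
print(demux("10.0.0.1", 80, "10.0.0.9", 4001))  # conn B
```

Note how this connects to the NAT discussion: if a NAT box rewrites `src_host` mid-connection, the arriving 4-tuple no longer matches any table entry, and the receiving TCP sees the segments as belonging to no known connection.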