Chapters 5 and 6: TCP

These two chapters together focus on transport: specifically, on TCP ("Transmission Control Protocol", the workhorse of the Internet). Chapter 5 deals with the actual mechanics of TCP (which is a sliding-window protocol similar to the ones described in section 2.5), while Chapter 6 deals with TCP's congestion-reduction mechanisms. (Chapter 6 deals with congestion generally, but we're only doing 6.3, on TCP congestion control.)

Section 5.1: UDP

This is about UDP, a best-effort (no acks, no bells and whistles) mechanism for sending a packet from A to B. About the only difference from IP itself is that UDP provides for a data checksum, and it also delivers data to a specified "port" on a specified host. A port (the same concept is used in TCP) is just a 16-bit number; at the operating system level, whenever you open a "socket" that "listens" for network packets, that socket is assigned (by either you or the operating system) a port number.
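
To make the port idea concrete, here is a minimal Python sketch (not from the text; the port number 5005 and the loopback address are arbitrary choices for illustration). One socket binds to a port and thereby "listens" for datagrams; another socket sends a datagram addressed to that host and port. Because the receiving socket is bound before the send and everything runs over loopback, the whole thing works in a single process.

    import socket

    # Receiver: bind a UDP socket to port 5005 so datagrams sent there reach it.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 5005))

    # Sender: no connection setup, no ack -- just address the datagram to host+port.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello", ("127.0.0.1", 5005))

    # The receiver gets the data plus the sender's own (host, port) address.
    data, addr = receiver.recvfrom(1500)
    print("received", data, "from", addr)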

UDP is like postal mail to a mailbox: if two competing senders each send to a given port on a given third machine C, then the process receiving the packets will see them interleaved, in whatever order they arrived. TCP is more like the phone: the receiving process would first connect exclusively to one sender, and then, when that sender was done, it would connect to the second sender. The senders would thus get through one at a time.
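
The "phone call" half of the analogy can be sketched the same way (again a hedged illustration, not the book's code; port 5006 is arbitrary). A TCP server accepts one connection at a time, and each accepted socket carries bytes from exactly one sender, in order; the UDP socket above, by contrast, hands back datagrams from any sender, interleaved in arrival order.

    import socket

    # TCP listener: connections are accepted and served one sender at a time.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("127.0.0.1", 5006))
    listener.listen()

    while True:
        conn, addr = listener.accept()     # exclusive connection to one sender
        while True:
            chunk = conn.recv(4096)        # bytes from that sender only, in order
            if not chunk:                  # empty read: the sender has hung up
                break
            print("from", addr, ":", chunk)
        conn.close()                       # now the next sender can get through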

Section 5.2: TCP

To be completed later
5.2.1: End-to-end issues
5.2.2: Packet format; sequence numbers
5.2.3: Connection setup and teardown; three-way handshake; state diagrams
5.2.4: TCP's flavor of sliding windows
5.2.5: Adaptive setting of the TimeOut interval
 

Study questions:

Exercises:

4    TCP state diagram
5    TCP state diagram
6    Probing of size-0 windows
9    Sequence-number issues
12   Duplicate SYN issues
18   Nagle algorithm
21   TIMEWAIT
25   Adaptive statistics for TimeOut
33   Simultaneous open

Section 6.3: TCP Congestion Control

TCP monitors the connection for signs of congestion and, when congestion is detected, reduces its rate of flow in order to relieve it. Specifically, TCP interprets lost packets as congestion indicators. On a typical Ethernet, perhaps one packet in 100,000 might be lost purely to noise and error, but one in 10 might be lost due to congestion; interpreting loss as a sign of congestion is therefore usually a pretty safe bet.

TCP reduces the rate of flow by reducing the window size. Specifically, TCP maintains a variable CongestionWindow, which is an upper bound on the window size used.

The general strategy TCP uses is sometimes called additive increase/multiplicative decrease. On loss, TCP (eventually) reduces CongestionWindow by a factor of 2 (the multiplicative decrease). Each windowful of successful transmissions, on the other hand, causes CongestionWindow to be incremented by 1 (the additive increase). TCP is thus quick to reduce the window, but slow to grow it.
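
A hedged sketch of the bookkeeping, counting the window in packets for simplicity (real TCP counts bytes and adds one maximum segment size per round trip; the function names here are just illustrative):

    # Additive increase / multiplicative decrease, window measured in packets.
    def on_window_acked(congestion_window):
        # A full window's worth of data was acknowledged: add 1.
        return congestion_window + 1

    def on_loss(congestion_window):
        # Loss detected: cut the window in half, but never below 1 packet.
        return max(congestion_window // 2, 1)

    # Example: the window grows 10 -> 11 -> 12, then a loss halves it to 6.
    cwnd = 10
    cwnd = on_window_acked(cwnd)   # 11
    cwnd = on_window_acked(cwnd)   # 12
    cwnd = on_loss(cwnd)           # 6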

Actually, whenever TCP starts up cold or restarts (initially, or after a packet loss), it sets CongestionWindow to 1. It then enters the so-called "slow-start" phase, during which the window size doubles each round trip.
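
The following sketch puts the two phases together (again in packets, and again an illustration rather than a real implementation). The threshold variable plays the role of TCP's slow-start threshold, which this summary doesn't go into: below it the window doubles each round trip, and above it the additive increase from the previous sketch takes over.

    # Window growth per round trip: slow start, then additive increase.
    def next_window(congestion_window, threshold):
        if congestion_window < threshold:
            return congestion_window * 2   # slow start: double each RTT
        return congestion_window + 1       # additive increase: +1 each RTT

    # Example: starting cold at 1 with an (assumed) threshold of 16 packets.
    cwnd = 1
    for rtt in range(8):
        print("RTT", rtt, "window =", cwnd)
        cwnd = next_window(cwnd, 16)
    # Window sequence printed: 1, 2, 4, 8, 16, 17, 18, 19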

Study questions:

Exercises:

14    Counting RTTs in slow start
15    Simple congestion-control example
20    TCP congestion at a router R
24    Detecting whether TCP implementations are behaving
25    Examining a TCP trace diagram
28    Sharing connections
39a  Two kinds of congestion (don't do 39b)