Computer Networks Week 12 Apr 22 Corboy Law 522
Review programs.
TCP Reassembler:
- description of files
- possibility of bad checksum fields
- PSocketPair
- max size of data stream: 10K (so a reassembly buffer of this size will always be sufficient; see the sketch below)
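A minimal sketch of the reassembly idea in Python (the class name, method names, and buffer layout here are my own, not the assignment's): keep one 10K buffer, copy each segment in at its stream offset, and track which bytes have arrived. Segments with bad checksum fields would be discarded before reaching this point.

    # Hypothetical reassembly sketch; not the assignment's actual interface.
    MAX_STREAM = 10 * 1024                    # 10K max stream: one buffer suffices

    class Reassembler:
        def __init__(self):
            self.buf = bytearray(MAX_STREAM)
            self.got = [False] * MAX_STREAM   # which byte positions have arrived

        def add_segment(self, offset, data):
            # Out-of-order, duplicate, and overlapping segments are all harmless.
            self.buf[offset:offset + len(data)] = data
            for i in range(offset, offset + len(data)):
                self.got[i] = True

        def stream(self):
            # The in-order prefix assembled so far.
            n = 0
            while n < MAX_STREAM and self.got[n]:
                n += 1
            return bytes(self.buf[:n])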
Review Tahoe, Fast Recovery (Reno)
ns/nam demos
- basic1: one-way-propagation = 10, run-time = 8 sec. Note unbounded slow start, bounded slow start, and congestion avoidance. Loss at ~3.5 sec with complex recovery.
- basic1 variations: run-time = 100 sec, bandwidth = 100 pkts/sec (so the theoretical max is 10,000 packets). Throughput as a function of queue size:

    queue size    throughput
         3           7836
         5           8490
        10           9306
        20           9512
        50           9490
       100           9667
- basic2: two connections sharing a link and competing for bandwidth. Demo. Note the short-term lack of obvious fairness. Note also that as we increase the delay on the first link, we would expect a slow decline in that link's share; in fact we get widely ranging values:
    delay    bw1    bw2
      10    1216   1339
      12    1339   1216
      14    1396   1151
      16    1150   1396
      18    1142   1396
      20    1903    737
      22    1147   1396
      24    1156   1348
      26    1002   1472
      28    1337   1112
      30    2106    608
      40     948   1566
      50    1033   1515
      90     453   2066
     100     412   2127
     110     730   1877
     120     432   2121
     130     515   2038
     140     289   2472
     150     441   2065
     160     567   1820
     170    1265   1286
     180     530   1914
     190     490   1849
     200     446   1991
TCP Fairness
Example 1: SAME RTT; both connections get approximately equal allocations, as we saw above.
Example 2: different RTT
Classic version:
Connection 2 has twice the RTT of connection 1.
As before, we assume both connections lose a packet when cwin1+cwin2 > 10; otherwise neither loses.
Both start at 1
connection 1:   1   2   3   4   5   6   7*  3   4   5   6   7   8*  4   5   6   7   8   9*  4   5   6   7   8   9*  ...
connection 2:   1       2       3       4*      2       3       4*      2       3       4*      2       3       4*

(Each connection-2 entry spans two connection-1 RTTs, since its RTT is twice as long; * marks a loss.)
Connection 2 averages half the window size of connection 1. Because it also takes twice as long to send each window, its throughput is down by a factor of FOUR.
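A quick way to check this is to simulate the example; here is a minimal Python sketch. The loss rule (both halve when cwin1+cwin2 > 10) is the one assumed above; the phasing of connection 2's updates is my own choice and shifts the exact numbers slightly.

    # Two AIMD connections; connection 2's RTT is twice connection 1's.
    # Time advances in units of connection 1's RTT.
    cwin1, cwin2 = 1, 1
    sent1 = sent2 = 0
    for t in range(1, 100001):
        sent1 += cwin1
        if t % 2 == 0:
            sent2 += cwin2                # connection 2 sends every other tick
        if cwin1 + cwin2 > 10:            # both connections lose together
            cwin1 //= 2
            cwin2 //= 2
        else:
            cwin1 += 1
            if t % 2 == 0:
                cwin2 += 1                # connection 2 also grows half as fast
    print(sent1 / sent2)    # ~3.6 with this phasing; the idealized argument gives 4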
What is really going on here? Is there something that can be "fixed"?
Early thinking (through the 90's?) was that there was.
Current thinking is to DEFINE what TCP does as "fair", and leave it at that.
TCP loss rate, p, versus cwnd
cwnd = k/sqrt(p), bw = k/(RTT*sqrt(p))
Explanation in terms of TCP sawtooth
tcp "friendliness": any transmission method that obeys this rule.
Note that the value of k is significant, though most estimates are
pretty much in the same range.
Let us assume we lose a packet at regular intervals, e.g. after every N
windowfuls, and that cwnd varies from M at the start of each cycle to
2*M at the end (thus reverting to M when we set cwnd = cwnd/2). Then,
in one cycle of N windowfuls, cwnd was incremented N times, so M+N = 2M
and thus M=N.
This means we sent N + (N+1) + (N+2) + ... + 2N packets before a loss; this
number is about (3/2)N^2. The loss rate is thus p = 1/((3/2)N^2), and solving
for N we get N = (2/3)^0.5 * 1/sqrt(p). The average cwnd is (3/2)N, so
cwnd_average = (3/2)*(2/3)^0.5 / sqrt(p) ≃ 1.225/sqrt(p). More commonly in the
literature we are interested in the maximum cwnd; applying the same technique
gives cwnd_max = 2*(2/3)^0.5 / sqrt(p) ≃ 1.633/sqrt(p).
Another way to look at this is throughput = cwnd/RTT ≃ k/(RTT*sqrt(p)), for k a constant in the range 1.2 to 1.5.
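A quick numeric check of the derivation, in Python:

    # One loss every N windowfuls, cwnd running from N up to 2N.
    from math import sqrt

    N = 100
    packets = sum(range(N, 2 * N + 1))     # N + (N+1) + ... + 2N
    p = 1.0 / packets                      # loss rate: one loss per cycle
    print(packets, 1.5 * N * N)            # ≈ (3/2)N^2
    print(1.5 * N, 1.225 / sqrt(p))        # average cwnd ≈ 1.225/sqrt(p)
    print(2 * N, 1.633 / sqrt(p))          # maximum cwnd ≈ 1.633/sqrt(p)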
High-bandwidth TCP
consequence for high bandwidth: the cwnd needed implies a very small p; unrealistically small!
Random losses (not due to congestion) keep window significantly smaller than it should be.
    TCP Throughput (Mbps)   RTTs between losses     cwnd     P (loss probability)
            1                       5.5               8.3       0.02
           10                        55                83       0.0002
          100                       555               833       0.000002
         1000                      5555              8333       0.00000002
       10,000                     55555             83333       0.0000000002
Table 1: RTTs Between Congestion Events for Standard TCP, for
1500-Byte Packets and a Round-Trip Time of 0.1 Seconds.
    Packet Drop Rate P      cwnd      RTTs between losses
         10^-2                12                8
         10^-3                38               25
         10^-4               120               80
         10^-5               379              252
         10^-6              1200              800
         10^-7              3795             2530
         10^-8             12000             8000
         10^-9             37948            25298
         10^-10          120,000            80000
Table 2: TCP window size in terms of drop rate
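Table 2 (and, with a little unit conversion, Table 1) can be regenerated from the formulas above; the Python sketch below assumes k = 1.2, so that the tabulated cwnd is the average cwnd 1.2/sqrt(p), with N = (2/3)*cwnd RTTs between losses.

    from math import sqrt

    # Regenerate Table 2 from cwnd = k/sqrt(p), k ≈ 1.2 (assumed).
    for exp in range(2, 11):
        p = 10.0 ** -exp
        cwnd = 1.2 / sqrt(p)
        rtts = (2.0 / 3.0) * cwnd        # N = M = (2/3) of the average cwnd
        print(f"10^-{exp}    cwnd = {cwnd:7.0f}    RTTs between losses = {rtts:7.0f}")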
The above two tables indicate that large window sizes require extremely small drop rates. This is the highspeed-TCP problem: how do we maintain a large window? The issue is that non-congestive (random) packet losses bring the window size down, far below where it could be.
One proposed fix: HighSpeed-TCP: for each no-loss RTT, allow an inflation of cwnd by more than 1, at least for large cwnd. If the increment is N = N(cwnd), this
is equivalent to having N parallel TCP connections.
    Congestion Window W     Number N(W) of Parallel TCPs
             1                         1.0
            10                         1.0
           100                         1.4
         1,000                         3.6
        10,000                         9.2
       100,000                        23.0
Table 3: Number N(cwnd) of parallel TCP connections roughly emulated by the HighSpeed TCP response function.
The formula for N(cwnd) is largely empirical.
N(cwnd) = max(1.0, 0.23 × cwnd^0.4)
The increase in window size is not "smooth", however.
Note that the second term in the max() above begins to dominate when cwnd = 38 or so.
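A one-liner check of the formula and the crossover point (Python):

    # The empirical N(cwnd) above, and where its second term takes over.
    def N(cwnd):
        return max(1.0, 0.23 * cwnd ** 0.4)

    for w in (1, 10, 100, 1000, 10000, 100000):
        print(w, round(N(w), 1))    # tracks Table 3 to within ~0.1
    # 0.23 * w**0.4 reaches 1.0 near w = (1/0.23)**2.5 ≈ 39, hence "38 or so"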
TCP Westwood
Keep continuous estimate of bandwidth, BWE (= ack rate * packet size)
BWE * RTTmin = min window size to keep bottleneck link busy
On loss, reduce cwnd to max(cwnd/2, BWE*RTTmin)
Classic sawtooth, TCP Reno
cwin alternates between cwin_min and cwin_max = 2*cwin_min.
cwin_max = transit_capacity + queue_capacity
If transit_capacity < cwin_min, then Reno does a pretty good job keeping the bottleneck link saturated.
But if transit_capacity > cwin_min, then when Reno drops to cwin_min, the
bottleneck link is not saturated until cwin climbs back to transit_capacity.
Westwood: on loss, cwin drops only to transit_capacity, a smaller reduction.
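To quantify this, here is a small Python sketch that computes the fraction of the bottleneck's capacity used over one sawtooth, assuming min(cwin, transit_capacity) packets are delivered per RTT (a simplification that ignores the RTT inflation as the queue fills):

    # Fraction of bottleneck capacity used over one Reno tooth (sketch).
    def utilization(transit, queue):
        cwin_max = transit + queue
        cwin_min = cwin_max // 2
        delivered = sum(min(c, transit) for c in range(cwin_min, cwin_max))
        return delivered / (transit * (cwin_max - cwin_min))

    print(utilization(100, 100))   # 1.0: queue >= transit keeps the link full
    print(utilization(100, 20))    # ~0.86: link idles while cwin climbs back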
What about random losses?
Reno: on random loss, cwin = cwin/2
Westwood: On random loss, drop back to transit_capacity.
If cwin < transit_capacity, don't drop at all!
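A sketch of the Westwood loss response in Python (the function and variable names are mine; in a real implementation BWE comes from the measured ack rate and RTTmin from the smallest RTT observed):

    # Westwood-style loss response (sketch).
    def on_loss(cwin, BWE, RTTmin):
        transit_capacity = BWE * RTTmin          # window that just fills the pipe
        # never cut below transit_capacity; never raise cwin on a loss
        return min(cwin, max(cwin / 2, transit_capacity))

    # Reno would return cwin/2 unconditionally.  With Westwood, a random loss
    # arriving while cwin < transit_capacity causes no reduction at all.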
TCP Friendliness
Suppose we are sending audio data in a congested environment. Because
of the real-time nature of the data, we can't wait for lost-packet
recovery, and so must use UDP rather than TCP. (Actually, we could use TCP unless the data is interactive;
that is, we can perfectly well use TCP to receive streaming audio
broadcasts. And if it's interactive, it's likely 8KB/sec voice, where
rate adjustment is impractical. Maybe video would be a better example?)
We suppose we can adjust the transmission rate as needed, but would
like to keep it relatively high.
How are we to manage congestion? How are we to maximize bandwidth without treating other connections unfairly?
A further problem with TCP is the sawtooth variation in cwnd (leading to at least some sawtooth variation in throughput). We don't want that.
TFRC, or TCP-Friendly Rate Control, uses the loss rate experienced, p, and the formulas above to calculate
a sending rate. It then allows sending at that rate. As the loss rate
increases, the sending rate is adjusted downwards, and so on. However,
adjustments are done more smoothly than with TCP.
From RFC 5348:
TFRC is designed to be reasonably fair
when competing for bandwidth with TCP flows, where we call a flow
"reasonably fair" if its sending rate is generally within a factor of two
of the sending rate of a TCP flow under the same conditions. [emphasis
added; a factor of two might not be considered "close enough" in some
cases.]
The penalty for having smoother throughput than TCP while still competing
fairly for bandwidth is that TFRC responds more slowly than TCP to changes
in available bandwidth.
The TFRC receiver is charged with sending back feedback packets,
which serve as (partial) acknowledgements, and also include a
receiver-calculated value for the loss rate, over the previous RTT. The
TFRC receiver might send back only one such packet per RTT.
The actual response protocol has several parts, but if the loss rate increases, then the new sending rate should decrease to
min(rate calculated from p, 85% of the former rate)
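For reference, the "rate calculated from p" comes from the TCP response function. Below is a Python rendering of the throughput equation of RFC 5348, with t_RTO approximated as 4*RTT (a simplification the RFC itself suggests); the parameter names are mine:

    from math import sqrt

    def tfrc_rate(s, RTT, p, b=1):
        # s = segment size (bytes), p = loss event rate, b = packets per ACK.
        # Returns the allowed sending rate in bytes/sec.
        t_RTO = 4 * RTT
        return s / (RTT * sqrt(2 * b * p / 3)
                    + t_RTO * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))

    # 1500-byte packets, RTT = 0.1 sec, loss rate 1%:
    print(tfrc_rate(1500, 0.1, 0.01))   # ≈ 168,000 bytes/sec, in line with
                                        # the k/(RTT*sqrt(p)) estimate above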
Satellite Internet: web acceleration
Here the problem is that RTT is sooo large.