Your second Mininet project is to compare the throughput of TCP Cubic with itself, for various amounts of additional delay on the second connection. You are to use the Mininet Python file comp_delay2.py, which sets up the following topology:
       +----h1----+
       |          |
h4----s1          r----h3
       |          |
       +----h2----+
The h1--r delay is zero, and the r--h3 delay is 100 ms. You should try the following values for the h2--r delay (DELAY2): 0 ms, 100 ms, 200 ms and 400 ms.
Do this for the existing value of QUEUE, which is 200, and also for a larger value of QUEUE, say 600. Is there a difference?
The existing value for BottleneckBW is 12 mbit, which works out to a bandwidth x delay product on the h1--h3 path of 100 packets: 12 mbit/sec x 100 ms = 1,200,000 bits = 150,000 bytes, or 100 packets of 1500 bytes each.
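For orientation, here is a rough sketch (not the actual comp_delay2.py) of how a topology like this can be built with Mininet's TCLink parameters. The real file also assigns the IP addresses used below (h3 is 10.0.3.10), enables forwarding on r, and configures r-eth3 with tc as described later.

# Illustration only -- not the actual comp_delay2.py.
from mininet.net import Mininet
from mininet.link import TCLink

QUEUE = 200           # packets, at the r--h3 bottleneck
BottleneckBW = 12     # mbit/sec
DELAY2 = '0ms'        # the h2--r delay; the value you will vary

net = Mininet(link=TCLink)
h1, h2, h3, h4 = [net.addHost(h) for h in ('h1', 'h2', 'h3', 'h4')]
r  = net.addHost('r')         # r acts as the router
s1 = net.addSwitch('s1')
net.addController('c0')

net.addLink(h1, s1)
net.addLink(h2, s1)
net.addLink(h4, s1)                              # h4 reaches h1/h2 through s1, for ssh
net.addLink(h1, r)                               # h1--r: no added delay
net.addLink(h2, r, delay=DELAY2)                 # h2--r: DELAY2
net.addLink(r, h3, bw=BottleneckBW, delay='100ms',
            max_queue_size=QUEUE)                # r--h3: the bottleneck
net.start()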
You will need a reasonably large blockcount; I recommend trying 200,000 (although you might want to start with 100,000 or even 50,000 to get a feel for whether everything is working, without having to wait so long). This blockcount value can either be entered into the sender program itself or given on the command line. If you take the first approach, it may be easiest to make two separate copies of sender.py, e.g. sender1.py and sender2.py, one per connection. If you use the command line, the full command line for sender.py for the first (cubic) connection is (note the use of the full path):
/home/mininet/loyola/cubic/sender 200000 10.0.3.10 5430 cubic
The other connection will use port 5431.
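So, for example, the second connection's command line would be identical except for the port number:

/home/mininet/loyola/cubic/sender 200000 10.0.3.10 5431 cubic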
To receive data, use the existing dualreceive2.py on h3. Start it like this, where the number on the command line is the same as the blockcount used by the two sender programs:
python3 dualreceive2.py 200000
When dualreceive2.py is done, it will close the connections. This may lead to a "ConnectionResetError" on the sending side. Ignore it.
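For reference, here is a stripped-down sketch of the kind of receiver dualreceive2.py is. This is not the actual file (which presumably also reports throughput figures); the 1000-byte block size is an assumption.

# Illustration only -- not the actual dualreceive2.py.
import socket, sys, threading

BLOCKSIZE = 1000                       # assumed size of each sender block
blockcount = int(sys.argv[1])          # same value given to the senders
expected = blockcount * BLOCKSIZE

def receive(port):
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(('', port))
    listener.listen(1)
    conn, addr = listener.accept()
    total = 0
    while total < expected:
        data = conn.recv(65536)
        if not data:
            break
        total += len(data)
    conn.close()                       # this close is what triggers the sender's ConnectionResetError
    print(port, 'received', total, 'bytes')

t1 = threading.Thread(target=receive, args=(5430,))
t2 = threading.Thread(target=receive, args=(5431,))
t1.start(); t2.start()
t1.join(); t2.join()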
The file comp_delay2.py does an improved job of setting the queue size at r, and makes sure that all delays are in the direction towards h3. The outbound interface at r, r-eth3, needs to have delay, bandwidth and queue set. However, the delay and queue settings interact, as the netem qdisc implements delay by storing packets in the queue. I've tried to reduce unwanted interactions between delay and queue by using a three-layer queuing discipline at r:

root: netem with delay only (qdisc handle 1:)
middle: htb with bandwidth only (qdisc handle 10:, class 10:1)
leaf: netem with queue only (qdisc handle 20:)
Outbound packets arrive and go to the leaf netem. They can be withdrawn from it only at the rate specified by the htb layer, so if too many arrive, packets are dropped. This is exactly the behavior we want.
After packets are withdrawn from the leaf queue by the htb layer, consistent with the htb rate specification, they then enter the "long queue" created by the root netem, which implements delay. So, first queue, then rate, then delay.
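Concretely, this hierarchy corresponds to tc commands along the following lines. The handles and the r-eth3 interface name come from the description above; 100ms, 12mbit and 200 are the default r--h3 delay, BottleneckBW and QUEUE; the exact commands issued by comp_delay2.py may differ.

tc qdisc add dev r-eth3 root handle 1: netem delay 100ms          # root: delay only
tc qdisc add dev r-eth3 parent 1:1 handle 10: htb default 1       # middle: bandwidth
tc class add dev r-eth3 parent 10: classid 10:1 htb rate 12mbit
tc qdisc add dev r-eth3 parent 10:1 handle 20: netem limit 200    # leaf: queue only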
You will start the two senders using ssh. To set up ssh, you need these steps:
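The details depend on your VM; on the standard Mininet VM, where all the hosts share the filesystem, a typical recipe is:

ssh-keygen -t rsa                                    # run once as user mininet; accept the defaults
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys      # authorize the new key
/usr/sbin/sshd                                       # run this on h1 and on h2

After that, ssh 10.0.0.1 from h4 should log in without asking for a password.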
After ssh is working without requiring passwords, start the two flows like this from h4:
ssh 10.0.0.1 /home/mininet/loyola/sender1.sh & ssh 10.0.0.2 /home/mininet/loyola/sender2.sh
where the scripts sender1.sh and sender2.sh have been set up to start sender (with full path specified!) with the parameters as above.
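For example, sender1.sh might contain just the following (sender2.sh would be identical except for the port, 5431):

#!/bin/bash
/home/mininet/loyola/cubic/sender 200000 10.0.3.10 5430 cubic

Remember to make both scripts executable (chmod +x sender1.sh sender2.sh).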
You can change the delay values in the Python program and restart Mininet. Alternatively, it is possible to change the h2--r delay without stopping Mininet; to change it to 200 ms, run the following on h2:
tc qdisc change dev h2-eth handle 1: netem delay 200ms
You can see how it's set by running, on h2, tc -s qdisc show dev h2-eth.
A bare-bones competition is highly subject to TCP phase effects. This is usually addressed by the introduction of some randomizing traffic. This can be done using udprandomtelnet.py. You should verify the following variable values: BottleneckBW=40 (40 mbit/sec) and density=0.02 (2%) (since two connections are using the same h1--r link). The version of udprandomtelnet.py linked to here has these values preset.
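If you are editing a copy of udprandomtelnet.py yourself, the two settings mentioned above are just assignments in the file, something like the following (variable names are from the text; the exact layout in the file may differ):

BottleneckBW = 40     # mbit/sec
density = 0.02        # 2%, since two connections share the h1--r link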
I've switched to UDP because the small TCP packets were running together. With the values above, the mean gap between packets is something like 2.5ms, and TCP tends to combine packets sent within 5-10ms of one another.
The random-telnet solution is still imperfect; there is still a great deal of variability. But it is better than running without it.
If you're going to do this, you should run udprandomtelnet.py on h1 and on h2; the existing parameters should be good, though you will have to specify the destination address:
python3 udprandomtelnet.py 10.0.3.10
To receive this traffic, run the following on h3 (5433 is the port number, and we're redirecting output to /dev/null because we're ignoring it):
netcat -l -u 5433 > /dev/null
I find it easiest to create a separate set of xterm windows on h1, h2 and h3 for the purposes of running this.
Because TCP Cubic tends to be more resistant to phase effects than TCP Reno, you may get good results without this. Let me know if you used udprandomtelnet, however.
Get comparison results for DELAY2 = 0, 100, 200 and 400 ms, with QUEUE = 200 and with QUEUE = 600. Because of the variability, you will need to do at least three runs for each combination.
DELAY2    | 0ms  | 100ms | 200ms | 400ms |
QUEUE=200 |      |       |       |       |
QUEUE=600 |      |       |       |       |
For each of the eight combinations, submit:
The claim is that cubic-cubic competition allocates bandwidth in inverse proportion to the RTTs; that is, with DELAY2=100ms, the second connection has twice the total RTT and so should get about half the bandwidth of the first. Does your data confirm this?
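To make the comparison concrete: ignoring queuing delay at r, the no-load RTTs are roughly 100 ms for the first connection and 100 ms + DELAY2 for the second, so under this claim the expected throughput ratios (first connection : second connection) are approximately:

DELAY2 =   0 ms:  1:1
DELAY2 = 100 ms:  2:1
DELAY2 = 200 ms:  3:1
DELAY2 = 400 ms:  5:1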