Why Reacting to Latency is Important for Congestion Control
Posted on April 13, 2020 by Taran Lynn

Recently I was running network tests at home. My setup consists of my desktop and laptop connected through a switch over a 1 Gbps link. I wanted to emulate a network with an infinite buffer at its bottleneck. To achieve this I set up an iperf3 server on my laptop, and then configured NetEm with the following parameters on my desktop.

> sysctl net.core.rmem_default=2147483647
> sysctl net.core.rmem_max=2147483647

> sysctl net.core.wmem_default=2147483647
> sysctl net.core.wmem_max=2147483647

> sysctl net.ipv4.tcp_rmem="2147483647 2147483647 2147483647"
> sysctl net.ipv4.tcp_wmem="2147483647 2147483647 2147483647"

> tc qdisc replace dev DEV root netem delay 30ms limit 115
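These sysctl settings don't persist across reboots, so before a run it's worth confirming they're still in place. Here is a minimal readback sketch in Python (the interface name is a placeholder for DEV):

import subprocess

# Hypothetical interface name; substitute the DEV used in the tc command above.
DEV = "enp3s0"

# Read the buffer sysctls back to confirm they took effect.
for key in ("net.core.rmem_max", "net.core.wmem_max",
            "net.ipv4.tcp_rmem", "net.ipv4.tcp_wmem"):
    subprocess.run(["sysctl", key], check=True)

# Show the installed qdisc; it should report netem with delay 30ms and limit 115.
subprocess.run(["tc", "qdisc", "show", "dev", DEV], check=True)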

Here DEV corresponds to my desktop's network interface; the netem qdisc adds 30 ms of delay to outgoing packets and limits its queue to 115 packets. Next I ran the following test for each of TCP Reno, Cubic, Vegas, and BBR. On the laptop side:

> iperf3 -s -1 --json > ${CC}_server.json

On the desktop side:

> sudo sysctl net.ipv4.tcp_congestion_control=$CC
> iperf3 -c $LAPTOP -t 120 --json -P2 > ${CC}_client.json

Note that I'm using two flows here. With one flow the algorithms would stop increasing their send rate before reaching a congested state. I'm not sure why this is, but with two flows we get more interesting results. The data can be found here.
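For completeness, the per-algorithm runs on the desktop side can be scripted along the following lines. This is just a sketch of the loop, not the exact script I used; the laptop address is a placeholder, and the server command above has to be re-run on the laptop before each iteration, since -1 makes iperf3 exit after serving one client.

import subprocess

LAPTOP = "192.168.1.20"  # placeholder for the laptop's address
ALGORITHMS = ["reno", "cubic", "vegas", "bbr"]

for cc in ALGORITHMS:
    # Switch the congestion control algorithm (requires root; run the script with sudo).
    subprocess.run(["sysctl", f"net.ipv4.tcp_congestion_control={cc}"], check=True)

    # Two parallel flows for 120 seconds, JSON output saved per algorithm.
    with open(f"{cc}_client.json", "w") as out:
        subprocess.run(["iperf3", "-c", LAPTOP, "-t", "120", "--json", "-P", "2"],
                       stdout=out, check=True)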

Analyzing the Data

To analyze the data I used a Python script to plot the send CWND, throughput, and RTT for each of the tests. Here are the results. Note that no flows sustained losses, so losses are not plotted. Also, for some reason the throughput measurements were significantly delayed on the client side, so the throughput plots use the server-side measurements.
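The plotting script itself isn't shown here, but the gist is straightforward. A minimal sketch along these lines, assuming iperf3's client-side JSON layout (per-interval "streams" entries carrying snd_cwnd in bytes and rtt in microseconds) and matplotlib for the plots:

import json
import matplotlib.pyplot as plt

def load_client_series(path):
    """Collect per-interval time, send CWND, and RTT samples from an iperf3 client JSON file."""
    with open(path) as f:
        data = json.load(f)
    times, cwnds, rtts = [], [], []
    for interval in data["intervals"]:
        for stream in interval["streams"]:
            times.append(stream["end"])
            cwnds.append(stream["snd_cwnd"] / 1e3)  # bytes -> KB
            rtts.append(stream["rtt"] / 1e3)        # microseconds -> ms
    return times, cwnds, rtts

times, cwnds, rtts = load_client_series("cubic_client.json")

fig, (ax_cwnd, ax_rtt) = plt.subplots(2, 1, sharex=True)
ax_cwnd.plot(times, cwnds, ".")
ax_cwnd.set_ylabel("send CWND (KB)")
ax_rtt.plot(times, rtts, ".")
ax_rtt.set_ylabel("RTT (ms)")
ax_rtt.set_xlabel("time (s)")
plt.show()

The throughput series can be pulled from the server-side files in the same way, using each interval's bits_per_second value.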

Reno

Cubic

Vegas

BBR

The high RTTs seen here are probably related to BBR's cwnd_gain parameter, which caps the data in flight at a multiple of the estimated bandwidth-delay product and therefore allows a standing queue to build at the bottleneck.