Test and Evaluation of HyStart, HyStart++, and SUSS

Experimental Setup

In this preliminary evaluation, we use a local testbed with the topology shown in Figure 1. A single sender transmits data to an Ethernet-connected PC and a Wi-Fi-connected laptop through two Linux routers. In the first experiment, which evaluates wired performance, we introduce a bottleneck using netem (the purple link in the figure). When evaluating performance with the receiver connected via Wi-Fi, we remove the netem rate limit so that the Wi-Fi link itself acts as the bottleneck.

In our performance comparison, we evaluate not only HyStart [1] and HyStart++ [2] but also SUSS [3]. HyStart and HyStart++ enhance the slow-start phase by moderating the exponential growth of cwnd so that it does not overshoot the path capacity, whereas SUSS improves delivery rates during the initial RTTs of slow start. It does so by accelerating cwnd growth when cwnd is significantly below its optimal value and the TCP flow is using less than its fair share of the bottleneck link. For further details on SUSS, refer to [3] and the accompanying GitHub repository.
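To make the intent of these mechanisms concrete, the toy sketch below compares plain slow-start doubling with a hypothetical "boosted" variant that grows cwnd faster while it is far below an assumed path BDP. It only illustrates the general idea discussed above; it is not the actual HyStart, HyStart++, or SUSS logic, for which we refer the reader to [1], [2], and [3].

    # Toy per-RTT cwnd growth comparison (illustration only; not the real algorithms).
    # bdp_pkts is treated as known here; real stacks must estimate it online.

    def plain_slow_start(bdp_pkts, init_cwnd=10, rtts=8):
        """Classic slow start: cwnd doubles every RTT."""
        cwnd, trace = init_cwnd, []
        for _ in range(rtts):
            trace.append(cwnd)
            cwnd = min(cwnd * 2, 2 * bdp_pkts)  # cap the overshoot for readability
        return trace

    def boosted_slow_start(bdp_pkts, init_cwnd=10, rtts=8, boost=4):
        """Hypothetical variant: grow faster while cwnd is far below the BDP,
        then fall back to plain doubling (loosely mirrors the idea behind SUSS [3])."""
        cwnd, trace = init_cwnd, []
        for _ in range(rtts):
            trace.append(cwnd)
            factor = boost if cwnd < bdp_pkts / 4 else 2
            cwnd = min(cwnd * factor, 2 * bdp_pkts)
        return trace

    if __name__ == "__main__":
        print("plain  :", plain_slow_start(413))
        print("boosted:", boosted_slow_start(413))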

Figure 1: Local testbed used for the evaluation.

Performance When the Bottleneck Link is Wired

Using netem, we configure the bottleneck bandwidth to 50 Mbps, the round-trip propagation delay to 100 ms, and the bottleneck buffer size to 1 BDP = 413 packets. In this experiment, the PC receives data from the sender over a single TCP flow.
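As a quick sanity check of the 1-BDP buffer size, and to show one way such a bottleneck can be emulated, the Python sketch below recomputes the 413-packet limit and prints a possible netem command. The 1514-byte frame size and the router egress interface name eth1 are assumptions, and the 100 ms delay could equally be split across the two directions.

    # Recompute the 1-BDP buffer (in packets) and print one possible netem setup.
    # Assumptions: 1514-byte MTU-sized frames, router egress interface eth1.
    RATE_BPS = 50_000_000   # 50 Mbps bottleneck rate
    RTT_S = 0.100           # 100 ms round-trip propagation delay
    PKT_BYTES = 1514        # MTU-sized frame (assumption)

    bdp_bytes = RATE_BPS / 8 * RTT_S            # 625,000 bytes
    bdp_pkts = round(bdp_bytes / PKT_BYTES)     # 413 packets
    print(f"BDP = {bdp_bytes:.0f} bytes = {bdp_pkts} packets")

    # Here the whole delay is applied in one direction; splitting it 50/50 also works.
    print(f"tc qdisc replace dev eth1 root netem rate 50mbit delay 100ms limit {bdp_pkts}")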
We repeat the experiment four times:

1) None of the Mechanisms Enabled

In the first iteration, HyStart, HyStart++, and SUSS are all disabled. As shown in Figure 2(a), cwnd grows exponentially until packet loss occurs, leading to heavy buffer overflow. The receiver then experiences a long stall before new data arrives (see the gap in the green curve in Figure 2(a)). The 5 MB data transfer takes 1.5 seconds, and for 20 MB, 1105 packets are lost.

2) HyStart Enabled

In the second iteration, HyStart is enabled (a sketch of how HyStart can be toggled on Linux follows this list). As shown in Figure 2(b), cwnd growth shifts from exponential to linear once the bottleneck link is saturated. This delays and moderates buffer overflow, reducing packet loss to 40 packets for a 20 MB transfer. The 5 MB transfer time improves to 1.29 seconds.

3) HyStart++ Enabled

In the third iteration, HyStart++ is enabled. As shown in Figure 2(c), the exponential growth is divided into two phases: slow start (SS) and conservative slow start (CSS), in which cwnd growth slows down. However, the transition to CSS occurs late and growth remains aggressive, causing significant packet loss. The 5 MB transfer takes 1.5 seconds, and 704 packets are lost for 20 MB.

4) SUSS Enabled

In the fourth iteration, SUSS is enabled. As shown in Figure 2(d), the accelerated exponential growth transitions to linear growth once the bottleneck link is saturated, delaying and moderating buffer overflow. This results in only 35 lost packets for a 20 MB transfer. The 5 MB transfer time is the shortest at 1.11 seconds.
The network conditions in this configuration are stable, and repeated experiments produce consistent results.
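As referenced under iteration 2, the sketch below shows how stock CUBIC's HyStart can be toggled on a Linux sender via its module parameter. HyStart++ and SUSS are not part of mainline CUBIC, so their switches depend on the specific patch or module installed; this is only a minimal sketch, and the exact parameter path may vary across kernel versions.

    # Minimal sketch: select CUBIC and toggle its built-in HyStart (requires root).
    # HyStart++ and SUSS come from separate patches, so their knobs are not shown.
    import subprocess
    from pathlib import Path

    def use_cubic_with_hystart(enabled: bool) -> None:
        subprocess.run(
            ["sysctl", "-w", "net.ipv4.tcp_congestion_control=cubic"], check=True
        )
        # 0 disables HyStart in tcp_cubic; 1 (the default) enables it.
        # The sysfs path is an assumption and may differ between kernel versions.
        Path("/sys/module/tcp_cubic/parameters/hystart").write_text(
            "1" if enabled else "0"
        )

    if __name__ == "__main__":
        use_cubic_with_hystart(False)   # iteration 1: no mechanism enabled
        # use_cubic_with_hystart(True)  # iteration 2: HyStart enabled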

Figure 2: Dynamics of cwnd, packets in flight, and total data delivered (Wired scenario).

Performance When the Bottleneck Link is Wi-Fi

For this experiment, we remove the 50 Mbps limit from the previous setup, making the Wi-Fi link the bottleneck. We still use netem, however, to add a 50 ms round-trip propagation delay on the wired section of the path. The Wi-Fi link adds 3 to 5 ms, resulting in a total round-trip propagation delay of 53 to 55 ms.
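A minimal sketch of this reconfiguration is shown below: the netem rate limit is dropped while the 50 ms delay is kept on the wired segment. The interface name eth1 is again an assumption.

    # Wi-Fi scenario: keep a 50 ms delay on the wired segment but no rate limit,
    # so the Wi-Fi hop (adding roughly 3-5 ms) becomes the bottleneck.
    # Interface name eth1 is an assumption; requires root.
    import subprocess

    subprocess.run(
        ["tc", "qdisc", "replace", "dev", "eth1", "root", "netem", "delay", "50ms"],
        check=True,
    )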
In this test, the laptop receives data from the sender over a single TCP flow. As shown in Figure 3, HyStart++ sustains exponential growth for longer than SUSS and HyStart, which increases queuing delay and packet loss without improving the delivery rate.

Figure 3: Dynamics of cwnd, packets in flight, and total data delivered (Wi-Fi scenario).

Due to the inherent instability of Wi-Fi links, repeating the experiment does not produce highly consistent results. To provide better insight, we repeat each test 20 times per mechanism and aggregate the results to compare Flow Completion Time (FCT) and packet loss for different flow sizes. This also allows us to analyze how these metrics vary across the evaluated mechanisms.
As mentioned, when none of the mechanisms is enabled, cwnd overshoots the path capacity, causing significant packet loss (see the green curve in Figure 4(b)). This, in turn, increases FCT, particularly for small flows (see the green curve in Figure 4(a)). In this configuration, while HyStart++ offers some improvement, SUSS and HyStart still achieve better performance thanks to their lower packet loss rates.
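The aggregation itself is straightforward; the sketch below shows one way to summarize the 20 runs of a given mechanism. The per-run CSV layout (columns flow_size_mb, fct_s, lost_pkts) and the file naming are assumptions made for illustration, not the exact format produced by our testbed.

    # Summarize FCT and packet loss per flow size across the 20 runs of one mechanism.
    # CSV columns flow_size_mb, fct_s, lost_pkts and the file pattern are assumptions.
    import csv
    import glob
    import statistics
    from collections import defaultdict

    def summarize(pattern: str) -> None:
        fct, loss = defaultdict(list), defaultdict(list)
        for path in glob.glob(pattern):
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    size = float(row["flow_size_mb"])
                    fct[size].append(float(row["fct_s"]))
                    loss[size].append(int(row["lost_pkts"]))
        for size in sorted(fct):
            p95 = statistics.quantiles(fct[size], n=20)[-1]
            print(f"{size:6.1f} MB  FCT mean={statistics.mean(fct[size]):.2f}s "
                  f"p95={p95:.2f}s  loss mean={statistics.mean(loss[size]):.0f} pkts")

    if __name__ == "__main__":
        summarize("results/suss_run*.csv")   # e.g., the 20 SUSS runs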
Figure 4: Comparison of FCT and packet loss across HyStart, HyStart++, and SUSS for different data transfer sizes.

References

[1] S. Ha, I. Rhee, and L. Xu, "CUBIC: A New TCP-Friendly High-Speed TCP Variant," ACM SIGOPS Operating Systems Review, vol. 42, no. 5, pp. 64-74, 2008.
[2] P. Balasubramanian, Y. Huang, and M. Olson, "HyStart++: Modified Slow Start for TCP," RFC 9406, May 2023.
[3] M. Arghavani, H. Zhang, D. Eyers, and A. Arghavani, "SUSS: Improving TCP Performance by Speeding Up Slow-Start," in Proceedings of the ACM SIGCOMM 2024 Conference, 2024, pp. 151-165.