Do we still need to worry about long fat pipes?

I am trying to settle a controversy we have at work.

A decade ago, TCP had notoriously bad performance over long fat pipes, i.e., network paths with a high bandwidth-delay product. Sysadmins would tune the TCP stack with "golden" sysctl values that magically pushed iperf transfer rates up to Gbps.
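To make the dispute concrete, the kind of "golden" recipe I mean looked roughly like this; the exact numbers varied from guide to guide, so treat these values as illustrative rather than recommended:

    # /etc/sysctl.conf fragment of the sort that used to be copy-pasted for high-BDP paths
    net.core.rmem_max = 16777216              # ceiling for socket receive buffers (16 MB)
    net.core.wmem_max = 16777216              # ceiling for socket send buffers (16 MB)
    net.ipv4.tcp_rmem = 4096 87380 16777216   # min / default / max TCP receive buffer
    net.ipv4.tcp_wmem = 4096 65536 16777216   # min / default / max TCP send buffer
    net.ipv4.tcp_window_scaling = 1           # allow windows larger than 64 KB
    net.ipv4.tcp_timestamps = 1               # timestamps for PAWS and RTT estimation
    net.ipv4.tcp_sack = 1                     # selective acknowledgements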

In the meantime, many things have happened. Linux, for one, has gained support for TCP timestamps, buffer auto-tuning, RTT measurement, CUBIC congestion control, and SACK, to name a few features that I am aware of.
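For what it's worth, whether these features are actually enabled on a given box can be checked directly; the comments show what I believe are typical defaults on recent kernels, not guaranteed values:

    sysctl net.ipv4.tcp_timestamps          # 1 = TCP timestamps on
    sysctl net.ipv4.tcp_sack                # 1 = selective ACKs on
    sysctl net.ipv4.tcp_window_scaling      # 1 = window scaling on
    sysctl net.ipv4.tcp_moderate_rcvbuf     # 1 = receive-buffer auto-tuning on
    sysctl net.ipv4.tcp_congestion_control  # usually "cubic"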

Do we still need to tune the TCP stack for long fat pipes, or do today's users get Gbps transfer rates out of the box?

1 Answer

No need anymore, unless the path is REALLY long (satellite links, for example). Buffer sizing is handled dynamically by the kernel and works well. In most cases the only tuning you might still want is a large MTU on dedicated storage networks and the like.
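If you want to sanity-check this yourself on a modern distribution, something like the following is enough; the interface name and the iperf3 endpoint are placeholders for your environment:

    # jumbo frames on a dedicated storage/back-end network (eth0 is a placeholder)
    ip link set dev eth0 mtu 9000

    # measure end-to-end TCP throughput with otherwise default settings
    # (198.51.100.10 is a placeholder iperf3 server)
    iperf3 -c 198.51.100.10 -t 30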
