An overview of how dynamically auto-tuning TCP buffer sizes improves transfer rates without manual tuning, presented by the Cal Poly Network Performance Research Group. The slides cover the motivation, the auto-tuning algorithms, and the experimental results.
Automatic TCP Buffer Tuning Jeffrey Semke, Jamshid Mahdavi & Matthew Mathis Presented By: Heather Heiman Cal Poly Network Performance Research Group
Problem • A single host may have many connections open at once, and each connection may traverse a path with a different bandwidth and delay. • Connections often fail to reach their maximum transfer rate because the default socket buffers are too small. • To improve transfer rates, systems are often tuned by hand (a typical manual adjustment is sketched below), but this requires an expert or a system administrator.
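For reference, manual tuning usually means an administrator hard-coding a socket buffer size per application. The sketch below shows what that looks like with the standard sockets API in Python; the 4 MB figure is purely illustrative and is not taken from the paper.

```python
import socket

# Hand-tuned buffers: an administrator picks one static size and applies it
# to the socket before use. The 4 MB value here is purely illustrative.
BUF_BYTES = 4 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)

# The kernel may round or clamp the request, so read the effective sizes back.
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```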
Problem • Even on a manually tuned system, TCP performance can still suffer, because a single static buffer size will be smaller than the bandwidth-delay product on some connections and larger than it on others. • “The bandwidth-delay product is the buffer space required at sender and receiver to obtain maximum throughput on the TCP connection over the path.” A worked example follows below.
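To make the definition concrete, here is a small worked example (the 100 Mbit/s and 80 ms figures are illustrative, not from the paper):

```python
# Bandwidth-delay product for an illustrative path: 100 Mbit/s bandwidth
# and an 80 ms round-trip time (example numbers, not from the paper).
bandwidth_bps = 100e6   # bits per second
rtt_s = 0.080           # round-trip time in seconds

bdp_bytes = bandwidth_bps / 8 * rtt_s
print(f"BDP = {bdp_bytes / 1e3:.0f} kB")  # -> 1000 kB, far above a 16 kB default
```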
Auto-Tuning • Auto-tuning dynamically sizes the socket buffers to track each connection’s bandwidth-delay product. • The sizing is based on network conditions and on system memory availability. • Before implementing auto-tuning, the following features should be in use (a quick check is sketched below): • TCP Extensions for High Performance (RFC 1323) • TCP Selective Acknowledgement Options (RFC 2018) • Path MTU Discovery (RFC 1191)
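One quick way to confirm these prerequisites on a modern system is to read the relevant kernel settings. The sketch below assumes a Linux host and Linux sysctl names; the paper's own implementation was done in NetBSD, where the equivalent options differ.

```python
from pathlib import Path

# Linux exposes these TCP features as sysctls under /proc/sys.
# (Linux names are used here for illustration; the paper's code was NetBSD.)
FEATURES = {
    "RFC 1323 window scaling":              "net/ipv4/tcp_window_scaling",
    "RFC 2018 selective acknowledgements":  "net/ipv4/tcp_sack",
    "path MTU discovery disabled (0 = on)": "net/ipv4/ip_no_pmtu_disc",
}

for label, key in FEATURES.items():
    path = Path("/proc/sys") / key
    value = path.read_text().strip() if path.exists() else "unavailable"
    print(f"{label}: {value}")
```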
Auto-Tuning Implementation • The receive socket buffer size is set to the operating system’s maximum socket buffer size. • The send socket buffer size is determined by three cooperating algorithms (a simplified sketch follows below): • The first tracks network conditions by following the connection’s congestion window. • The second balances memory usage across all connections. • The third sets a hard limit to prevent excessive memory use.
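A minimal sketch of that three-part policy is shown below. It follows the slide's description (and the paper's rough rule of requesting about twice the congestion window), but the function name, parameters, and example numbers are illustrative rather than the NetBSD kernel code.

```python
def auto_sndbuf(cwnd_bytes, n_connections, pool_bytes, cap_bytes):
    """Simplified send-buffer sizing combining the three algorithms above."""
    want = 2 * cwnd_bytes                              # 1. track network conditions via cwnd
    fair_share = pool_bytes // max(n_connections, 1)   # 2. balance memory across connections
    return min(want, fair_share, cap_bytes)            # 3. hard limit on memory use


# Example: cwnd has grown to 256 kB, 50 connections share a 16 MB buffer pool,
# and a 1 MB per-connection cap applies (all numbers are illustrative).
print(auto_sndbuf(256 * 1024, 50, 16 * 1024 * 1024, 1024 * 1024))
```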
Types of Connections • default: connections used the NetBSD 1.2 static default socket buffer size of 16 kB. • hiperf: connections were hand-tuned for performance with a static socket buffer size of 400 kB, which was adequate for connections to the remote receiver but overbuffered for local connections. • auto: connections used dynamically adjusted socket buffer sizes according to the implementation described in section 2 of the paper.
Testing Results Only one connection type was run at a time so that the performance and memory usage of each type could be examined in isolation.
Testing Results Concurrent data transfers were run from the sender to both the remote receiver and the local receiver.
Remaining Issues • In some TCP implementations, cwnd is allowed to grow even when the connection is not being controlled by the congestion window, which causes the dynamically sized send buffers to expand unnecessarily and waste memory. • Allowing very large windows could also slow TCP’s control-system response, because long queues of packets build up in the network.
Conclusion • TCP needs to use resources more efficiently so that one connection does not starve other connections of memory. • Auto-tuning prevents a connection from taking more than its fair share of those resources.