
Y. Kodama, T. Kudoh, O. Tatebe, S. Sekiguchi — Grid Technology Research Center

This study achieves stable network flow in high bandwidth-delay product networks by implementing smooth traffic shaping on the GNET-1 hardware network testbed. Applying this shaping stabilizes traffic in such networks; results show improved performance and reduced packet loss on a transpacific network.


Presentation Transcript


  1. Realization of a stable network flow with high performance communication in a high bandwidth-delay product network
  Y. Kodama, T. Kudoh, O. Tatebe, S. Sekiguchi
  Grid Technology Research Center, National Institute of Advanced Industrial Science and Technology (AIST)

  2. Outline
  • Background
    • What is the problem in a high bandwidth-delay product network?
  • Smooth traffic shaping
  • Hardware network testbed GNET-1
  • Experiments
    • Results on a network emulated by GNET-1
    • Results on a transpacific network in BWC03
  • Conclusion

  3. Background
  • Why is traffic on a high bandwidth-delay product network not stable?
  • [Figure: three 500 Mbps streams (A, B, C) enter a 2.4 Gbps network; the 1.5 Gbps aggregate average is below 2.4 Gbps, but each stream bursts at a 1 Gbps peak within each RTT, and sometimes packets are lost!]
  • TCP has software pacing through the self-clocking of ACK packets, but it is not always effective.
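To make the burst arithmetic concrete (my own illustration, assuming the on/off burst model implied by the figure): each stream averages 500 Mbps but transmits at the 1 Gbps line rate during a burst, so when the bursts of all three streams coincide, the instantaneous load is 3 Gbps, above the 2.4 Gbps bottleneck, even though the 1.5 Gbps average fits.

    streams = 3
    avg_rate = 500e6      # average rate of each stream (bps)
    line_rate = 1e9       # GbE peak: each burst is sent at line rate
    bottleneck = 2.4e9

    print(streams * avg_rate <= bottleneck)   # True: the average load fits
    print(streams * line_rate <= bottleneck)  # False: coinciding bursts overflow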

  4. Smooth traffic shaping
  • Limit the bandwidth of each stream rigidly to 1/n of the bottleneck rate by adjusting the IFG (Inter-Frame Gap).
  • Adjusting the IFG limits the stream bandwidth very smoothly; we realize it on the hardware network testbed GNET-1.
  • [Figure: inserting an IFG equal to the frame length after each frame turns 1 Gbps peaks into a smooth 500 Mbps average stream.]
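The arithmetic behind IFG shaping can be sketched as follows (my illustration of the principle, not GNET-1's firmware): on a line of rate L, stretching each frame of F bytes with an extra gap of G byte-times gives an effective rate of L * F / (F + G), so reaching a target rate R requires G = F * (L / R - 1). For R = L / 2 the gap equals the frame length, as in the figure.

    def ifg_bytes(frame_len, line_rate_bps, target_bps):
        # Gap (in byte times) appended after each frame so the long-run
        # rate drops from line_rate_bps to target_bps.
        return frame_len * (line_rate_bps / target_bps - 1.0)

    # Shaping 1500-byte frames on GbE down to 500 Mbps:
    print(ifg_bytes(1500, 1e9, 500e6))  # 1500.0 -> IFG = frame length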

  5. Adapting smooth traffic shaping
  • Traffic on a high bandwidth-delay product network becomes stable.
  • [Figure: each 500 Mbps stream (A, B, C) passes through a GNET-1 before entering the 2.4 Gbps network; the shaped 1.5 Gbps aggregate stays below 2.4 Gbps, with no 1 Gbps peaks.]

  6. The look of GNET-1
  • [Photograph: the GNET-1 unit; controlled via an SNMP agent and USB.]
  • Width: 19 inch; height: 1U (1.7 inch)
  • GBIC: 4 ports

  7. Block diagram of GNET-1
  • [Figure: block diagram; the network ports attach via GBIC interfaces.]

  8. Usage of GNET-1
  • [Figure: GNET-1 units deployed at several points around the Internet.]
  • Emulation: delay, bit error rate, output bandwidth, buffer control, etc.
  • Measurement: precise network statistics, e.g. input/output bandwidth every 100 microseconds, with local clocks synchronized using GPS.
  • New protocol prototyping: feasibility studies of proposed protocols.

  9. Outline
  • Background
    • What is the problem in a high bandwidth-delay product network?
  • Smooth traffic shaping
  • Hardware network testbed GNET-1
  • Experiments
    • Results on a network emulated by GNET-1
    • Results on a transpacific network in BWC03
  • Conclusion

  10. Network emulated by GNET-1
  • [Figure: PC1 and PC2 send through a switch and a pair of GNET-1s, which emulate the bottleneck network, to PC3 and PC4.]
  • PC1: iperf -c PC3 -w 8M; PC2: iperf -c PC4 -w 8M (socket buffer limit increased)
  • Standard TCP/IP with the WADIFQ option of Web100
  • GNET-1: smooth traffic shaping at 250 Mbps per stream; fine-grain bandwidth measurement at 2 ms intervals
  • Emulated bottleneck: one-way delay 100 ms, bandwidth 500 Mbps, buffer size 512 KB
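For context (my arithmetic from the slide's parameters): the bandwidth-delay product of this emulated path dwarfs the 512 KB bottleneck buffer, which is why unshaped bursts overflow it.

    def bdp_bytes(rtt_s, rate_bps):
        # Bandwidth-delay product: bytes in flight needed to fill the pipe.
        return rtt_s * rate_bps / 8

    rtt = 2 * 0.100                            # one-way delay 100 ms -> RTT 200 ms
    print(bdp_bytes(rtt, 500e6))               # 12.5e6 bytes = 12.5 MB in flight
    print(512 * 1024 / bdp_bytes(rtt, 500e6))  # buffer is only ~4% of the BDP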

  11. Effects of traffic shaping
  • [Figure: bandwidth traces with no traffic shaping vs. traffic shaping at 250 Mbps per stream; bottleneck: one-way delay 100 ms, 500 Mbps, buffer size 512 KB.]

  12. Transpacific network in BWC03 (Bandwidth Challenge at SC'03)
  • Trans-Pacific Gfarm Datafarm testbed: 147 nodes in total; Gfarm disk capacity 70 TB; disk read/write 13 GB/s; trans-Pacific theoretical peak 3.9 Gbps
  • [Figure: testbed map. Japanese sites (AIST, Univ. of Tsukuba, Titech, KEK, NII) connect over SuperSINET, Tsukuba WAN, and Maffin to the APAN Tokyo XP; US sites (SC2003 Phoenix, Indiana Univ., SDSC) connect over Abilene. Transpacific links via APAN/TransPAC: New York 2.4G (1G), Chicago OC-12 ATM 500M, Los Angeles 2.4G. Site annotations give node count, disk capacity, and throughput (e.g. 32 nodes / 23 TB / 2 GB/s).]

  13. Environment
  • [Figure: at each end, PCs on 1G links connect through switches and GNET-1s (shaping on 1G ports) to the three transpacific paths.]
  • SuperSINET via New York: 1G, 285 ms
  • APAN/TransPAC via Chicago: 500M, 250 ms
  • APAN/TransPAC via Los Angeles: 2.4G, 141 ms; the LA line was divided into three 1G links
  • 11 PCs on both ends; HighSpeedTCP/IP with WADIFQ, MTU size 6000

  14. Smooth traffic shaping (results of BWC03)
  • Per-path rates: 950 Mbps in NY (+20), 500 Mbps in Chicago, 800 Mbps in LA3, 750 Mbps in LA2, 780 Mbps in LA1 (-20); and 930 Mbps in NY, 500 Mbps in Chicago, 800 Mbps in LA3, 750 Mbps in LA2, 800 Mbps in LA1
  • Achieved stable 3.78 Gbps disk-to-disk data transfer on a 3.9 Gbps, 144 ms long-fat network.
  • Currently the shaping bandwidth is set by the user; we will add an automatic tuning facility.
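As a quick consistency check (my arithmetic; the per-path figures are from the slide), both sets of per-path rates sum to the reported 3.78 Gbps aggregate:

    run1 = [950, 500, 800, 750, 780]  # NY, Chicago, LA3, LA2, LA1 (Mbps)
    run2 = [930, 500, 800, 750, 800]
    print(sum(run1), sum(run2))       # 3780 3780 -> 3.78 Gbps on a 3.9 Gbps path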

  15. Conclusion and future plan
  • Smooth traffic shaping by GNET-1 realizes stable network traffic on a high bandwidth-delay product network.
  • Automatic tuning of the bandwidth of each stream is the next challenge.
  • For details of GNET-1, please refer to http://www.gtrc.aist.go.jp/gnet/.
  • We are also developing a software pacing method in the network driver (see the sketch after this slide).
  • We are now developing a new tool for 10GbE.
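Software pacing follows the same idea as the hardware IFG: space the sends so that each packet's time slot matches the target rate. A minimal user-space sketch of the principle only (the authors' method lives in the network driver; the function and names here are my own):

    import time

    def paced_send(sock, data, pkt_len, target_bps):
        # Send one packet per slot of pkt_len * 8 / target_bps seconds,
        # emulating a fixed inter-frame gap in software.
        slot = pkt_len * 8 / target_bps
        next_send = time.monotonic()
        for off in range(0, len(data), pkt_len):
            while time.monotonic() < next_send:
                pass                      # busy-wait: sleep() is too coarse here
            sock.send(data[off:off + pkt_len])
            next_send += slot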

  16. Photograph of GNET-10
  • [Photograph: the GNET-10 unit.]
  • 19-inch rack mountable, 2U height
  • FPGA: XC2VP75 x 2; memory: 1 GB x 2
  • 10GbE: LR, 2 ports; GbE: GBIC, 2 ports

  17. Sockbuf and WADIFQ effects on a stream
  • [Figure: bandwidth measured every 1 ms; bottleneck line: 100 ms, 1 Gbps, 16 MB buffer; no packet loss on the network.]
  • WADIFQ: a full IFQ (interface queue) is not counted as congestion, which has the same effect as setting the IFQ very large.
  • Required sockbuf size: 100 ms * 2 * 1 Gbps = 25 MB
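The required socket buffer is just the round-trip bandwidth-delay product of the path; a sketch of the slide's arithmetic:

    def required_sockbuf_bytes(one_way_delay_s, rate_bps):
        # The sender must hold a full RTT (= 2 * one-way delay) of unacked data.
        return 2 * one_way_delay_s * rate_bps / 8

    print(required_sockbuf_bytes(0.100, 1e9))  # 25e6 bytes = 25 MB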
