
Enabling Large Scale Network Simulation with 100 Million Nodes using Grid Infrastructure



1. Enabling Large Scale Network Simulation with 100 Million Nodes using Grid Infrastructure
  Hiroyuki Ohsaki
  Graduate School of Information Sci. & Tech., Osaka University, Japan
  oosaki@ist.osaka-u.ac.jp

2. 1. State Goals and Define the System
  • Goals
    • Show the effectiveness of our network model partitioning method, TI-PART
    • Compare the performance of TI-PART with random partitioning
    • Find the optimal configuration of the TI-PART balancing parameter "alpha"
  • System Definition
    • Parallel distributed simulation environment including...
      • Computing resources running PDNS version 2.27-v1a
      • Networking resources
      • Network model partitioning software implementing TI-PART (a generic partition-scoring sketch follows below)
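
TI-PART itself is not described in this transcript, so the sketch below only shows a generic way to score a network-model partition by trading off cut links against load imbalance, with a weighting factor alpha standing in for the balancing factor named above. The function and the objective are assumptions for illustration, not the actual TI-PART algorithm; the random-partitioning baseline mentioned in the goals could be scored the same way.

```python
# Hypothetical partition-scoring function: NOT the actual TI-PART algorithm,
# only a generic trade-off between cut links and load imbalance weighted by
# a balancing factor "alpha".
from collections import Counter

def partition_score(edges, assignment, alpha):
    """edges: list of (u, v) node pairs; assignment: dict node -> partition id."""
    cut_links = sum(1 for u, v in edges if assignment[u] != assignment[v])
    loads = Counter(assignment.values())            # nodes per partition
    imbalance = max(loads.values()) - min(loads.values())
    return cut_links + alpha * imbalance            # lower is better

# Example: a 4-node ring split across 2 partitions
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
assignment = {0: 0, 1: 0, 2: 1, 3: 1}
print(partition_score(edges, assignment, alpha=0.3))   # -> 2.0
```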

3. 2. List Services and Outcomes
  • Services Provided
    • Discrete-event simulation for a given network model and traffic matrix (see the event-loop sketch below)
  • Outcomes
    • A simulation event log entry for each packet-processing event
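
As background for the service listed above, here is a minimal sketch of a discrete-event loop and the kind of per-event log line it might emit. It is not PDNS code; the event descriptions and the log format are made up for illustration.

```python
# Minimal discrete-event loop: pending events are kept in a priority queue
# ordered by timestamp, and one log line is emitted per processed event.
# This is an illustration only, not PDNS code.
import heapq

def run(events):
    """events: iterable of (time_in_seconds, description) tuples."""
    queue = list(events)
    heapq.heapify(queue)
    while queue:
        time, description = heapq.heappop(queue)
        print(f"{time:.6f}s {description}")        # one log entry per event

run([(0.000123, "node 0 sends packet 1"),
     (0.004567, "node 1 receives packet 1")])
```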

4. 3. Select Metrics
  • Speed (case of successful service)
    • Individual: none
    • Global: time required for completing all simulation events (speedup and efficiency formulas sketched below)
  • Reliability (case of error): none
  • Availability (case of unavailability): none
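
Because the only global metric is total running time, the natural derived quantities for the write-up are speedup and parallel efficiency relative to the single-resource run. The definitions below are the standard ones; the values themselves have to come from the measurements.

```python
# Standard derived metrics from measured running times; all inputs are
# measurements to be obtained in the experiments, none are given here.
def speedup(t_single, t_parallel):
    """Ratio of single-resource running time to parallel running time."""
    return t_single / t_parallel

def efficiency(t_single, t_parallel, n_resources):
    """Speedup normalized by the number of computing resources used."""
    return speedup(t_single, t_parallel) / n_resources
```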

5. 4. List Parameters
  • System parameters
    • Networking resource related
      • Topology
      • Link bandwidth, latency, loss ratio
      • Queue size and discipline (e.g., DropTail or RED)
    • Computing resource related
      • Number of resources
      • CPU type, speed, number of CPUs
      • Memory size, speed
    • TI-PART related
      • Balancing factor "alpha"
  • Workload parameters
    • Network model
      • Topology, number of nodes, degree
      • Link bandwidth, latency, loss ratio
      • Queue size and discipline (e.g., DropTail or RED)
    • Traffic matrix
  (One way to capture these parameters in an experiment harness is sketched below.)
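
One way to make this parameter list concrete in an experiment harness is a single configuration record per run; the structure below is a hypothetical sketch with field names chosen to mirror the slide, not something taken from the original study.

```python
# Hypothetical per-run configuration record mirroring the parameter list above;
# field names and types are assumptions, not taken from the original study.
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    # Computing resource related
    num_resources: int
    # TI-PART related
    alpha: float                  # balancing factor
    # Network model (workload)
    num_nodes: int
    degree: float
    max_bandwidth_mbps: float     # link bandwidth drawn uniformly from 1 Mbps up to this
    max_latency_ms: float         # link latency drawn uniformly from 0.1 ms up to this
    queue_discipline: str         # e.g., "DropTail" or "RED"
    flows_per_node: float         # TCP flows = num_nodes * flows_per_node
```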

6. 5. Select Factors to Study
  • System parameters
    • Networking resource related
      • Topology
      • Link bandwidth, latency, loss ratio
      • Queue size and discipline (e.g., DropTail or RED)
    • Computing resource related
      • Number of resources (1, 2, 4, 6, 8)
      • CPU type, speed (use cpufreq?), number of CPUs
      • Memory size, speed
    • TI-PART related
      • Balancing factor "alpha" (0.1, 0.2, 0.3, 0.4, 0.5)
  • Workload parameters
    • Network model
      • Topology, number of nodes, degree
      • Link bandwidth, latency, loss ratio
      • Queue size and discipline (e.g., DropTail or RED)
    • Traffic matrix
  (A full-factorial enumeration of the varied levels is sketched below.)
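
The varied levels above (number of computing resources and alpha) can be enumerated as a full-factorial set of system-level design points, as sketched below; the workload levels from the "Select Workload" slide would be crossed in the same way. Whether the study actually uses a full factorial design is left open on the "Design Experiments" slide, so this is only one possible choice.

```python
# Full-factorial enumeration of the system factor levels listed above.
# This is only one possible experiment design; the slides leave the design open.
from itertools import product

num_resources_levels = [1, 2, 4, 6, 8]
alpha_levels = [0.1, 0.2, 0.3, 0.4, 0.5]

design_points = list(product(num_resources_levels, alpha_levels))
print(len(design_points))                 # 25 system-level combinations
for n, alpha in design_points[:3]:
    print(f"resources={n}, alpha={alpha}")
```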

7. 6. Select Evaluation Technique
  • Use analytical modeling? No
  • Use simulation? No
  • Use measurement of a real system? Yes

8. 7. Select Workload
  • Generate a random network model with the following parameters (one possible generator is sketched below)
    • Number of nodes: 10, 100, 1000, 10000, 100000
    • Degree: 2, 2.5, 3, 3.5, 4
    • Link bandwidth: uniformly distributed between 1 Mbps and 10, 100, or 1000 Mbps
    • Link latency: uniformly distributed between 0.1 ms and 1, 10, or 100 ms
    • Traffic matrix: number of TCP flows = number of nodes × 0.1, 1, or 10
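
The slide does not name the topology generator, so the sketch below assumes a simple random graph with the stated average degree and the stated uniform bandwidth and latency ranges (using networkx for convenience); it is one plausible reading of the slide, not the generator actually used in the study.

```python
# One plausible generator for the workload above: n nodes, average degree d
# (hence n*d/2 links), bandwidth uniform in [1, max_bw] Mbps, latency uniform
# in [0.1, max_lat] ms, and num_nodes * flows_per_node random TCP flows.
# The actual generator used in the study is not specified on the slide.
import random
import networkx as nx

def generate_model(n, degree, max_bw_mbps, max_lat_ms, flows_per_node, seed=1):
    rng = random.Random(seed)
    num_links = int(n * degree / 2)
    g = nx.gnm_random_graph(n, num_links, seed=seed)
    for u, v in g.edges():
        g[u][v]["bandwidth_mbps"] = rng.uniform(1.0, max_bw_mbps)
        g[u][v]["latency_ms"] = rng.uniform(0.1, max_lat_ms)
    # Traffic matrix: TCP flows between randomly chosen distinct node pairs.
    num_flows = int(n * flows_per_node)
    flows = [tuple(rng.sample(range(n), 2)) for _ in range(num_flows)]
    return g, flows

g, flows = generate_model(n=100, degree=3, max_bw_mbps=100,
                          max_lat_ms=10, flows_per_node=1)
print(g.number_of_nodes(), g.number_of_edges(), len(flows))
```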

9. 8. Design Experiments
  • ?

10. 9. Analyze and Interpret Data
  • ?

11. 10. Present Results
  • Running time vs. number of computing resources for different...
    • Number of nodes, degree
    • Link bandwidth and latency
    • Traffic matrix
  (A plotting sketch for these graphs follows below.)
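
A minimal sketch of the first graph listed above (running time versus number of computing resources, one curve per node count). The `results` mapping is a placeholder to be filled from the measurements; no measured values appear in this transcript.

```python
# Sketch of the "running time vs. number of computing resources" graph.
# `results` is a placeholder mapping node count -> measured running times (s),
# one value per resource count; no real measurements are included here.
import matplotlib.pyplot as plt

resource_counts = [1, 2, 4, 6, 8]
results = {}   # e.g., {100000: [t1, t2, t4, t6, t8], ...} filled from experiments

for num_nodes, times in sorted(results.items()):
    plt.plot(resource_counts, times, marker="o", label=f"{num_nodes} nodes")

plt.xlabel("Number of computing resources")
plt.ylabel("Running time [s]")
plt.legend()
plt.savefig("running_time_vs_resources.png")
```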
