Inferring the Topology and Traffic Load of Parallel Programs in a VM environment


Presentation Transcript


  1. Inferring the Topology and Traffic Load of Parallel Programs in a VM environment Ashish Gupta Peter Dinda Department of Computer Science Northwestern University

  2. Overview • Motivation behind parallel programs in a VM environment • Goal: To infer the communication behavior • Offline implementation • Evaluating with parallel benchmarks • Online Monitoring in a VM environment • Conclusions

  3. Virtuoso: A VM based abstraction for a Grid environment

  5. Motivation • A distributed computing environment based on Virtual Machines • Raw machines connected to user’s network • Our Focus: Middleware support to hide the Grid complexity • Our goal here: Efficient execution of Parallel applications in such an environment

  6. Parallel application behavior drives intelligent placement and virtual networking of parallel applications • Virtual networks with VNET • VM encapsulation

  7. VNET • Abstraction: A set of VMs on the same Layer 2 network • Virtual Ethernet LAN

  8. Goal of this project: an online topology inference framework for a VM environment that recovers the application topology from low-level traffic monitoring

  9. Approach • Design an offline framework • Evaluate with parallel benchmarks • If successful, design an online framework for VMs

  10. An offline topology inference framework Goal: A test-bed for traffic monitoring and evaluating topology inference methods

  11. The offline method • Synced Parallel Traffic Monitoring • Traffic Filtering and Matrix Generation • Matrix Analysis and Topology Characterization


  15. The offline method • Synced Parallel Traffic Monitoring • Traffic Filtering and Matrix Generation • Matrix Analysis and Topology Characterization • Example: PVMPOV inference

  16. The pipeline (Synced Parallel Traffic Monitoring, Traffic Filtering and Matrix Generation, Matrix Analysis and Topology Characterization) is driven by the Infer.pl script
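The filtering-and-matrix-generation stage can be sketched as follows. This is a minimal illustration only; the record layout, host names, and function name are assumptions, not taken from Infer.pl itself:

```python
# Hypothetical sketch of the matrix-generation stage: turn per-host
# packet records (already filtered to the application's traffic) into
# a byte-count traffic matrix indexed by host.
from collections import defaultdict

def build_traffic_matrix(records, hosts):
    """records: iterable of (src, dst, nbytes) tuples."""
    index = {h: i for i, h in enumerate(hosts)}
    n = len(hosts)
    matrix = [[0] * n for _ in range(n)]
    for src, dst, nbytes in records:
        if src in index and dst in index:   # drop traffic to outside hosts
            matrix[index[src]][index[dst]] += nbytes
    return matrix

# Example with made-up hosts and packet sizes:
hosts = ["host1", "host2", "host3"]
records = [("host1", "host2", 1500), ("host2", "host1", 60),
           ("host1", "host3", 1500), ("host2", "host3", 9000)]
m = build_traffic_matrix(records, hosts)
# m[0][1] == 1500, m[1][2] == 9000
```

Each cell then gives the total bytes sent from the row host to the column host over the monitored interval, which is exactly the input the analysis stage needs.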

  17. Parallel Benchmarks Evaluation Goal: To test the practicality of low-level traffic-based inference

  18. Parallel benchmarks used • Synthetic benchmarks: Patterns • N-dimensional mesh-neighbor • N-dimensional toroid-neighbor • N-dimensional hypercubes • Tree reduction • All-to-All • A scheduling mechanism generates deadlock-free and efficient communication schemes
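The synthetic patterns above can be enumerated programmatically as sets of (sender, receiver) pairs. A minimal sketch for two of them, mesh-neighbor and hypercube; the function names and representation are hypothetical, not from the Patterns benchmark itself:

```python
# Enumerate neighbor pairs for two synthetic communication patterns.
from itertools import product

def mesh_neighbors(dims):
    """Undirected neighbor pairs for an N-dimensional mesh of given extents."""
    pairs = []
    for point in product(*(range(d) for d in dims)):
        for axis, extent in enumerate(dims):
            if point[axis] + 1 < extent:        # link to the next node on this axis
                nbr = list(point)
                nbr[axis] += 1
                pairs.append((point, tuple(nbr)))
    return pairs

def hypercube_neighbors(n_dims):
    """Undirected neighbor pairs in an n-dimensional hypercube.

    Nodes are integers 0..2^n - 1; neighbors differ in exactly one bit."""
    nodes = range(2 ** n_dims)
    return [(u, u ^ (1 << b)) for u in nodes for b in range(n_dims)
            if u < u ^ (1 << b)]                # count each edge once

# A 2x2 mesh has 4 links; a 3-D hypercube (8 nodes) has 12 links.
```

A toroid differs from the mesh only in wrapping each axis around, and tree reduction and all-to-all are similarly small enumerations.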

  19. Application benchmarks • NAS PVM benchmarks • Popular benchmarks for parallel computing • 5 benchmarks • PVM-POV: distributed ray tracing • Many others possible… • The inference is not PVM-specific • Applicable to all communication, e.g. MPI, even non-parallel apps

  20. Patterns application [figures]: traffic matrices for the 3-D toroid, 3-D hypercube, 2-D mesh, reduction tree, and all-to-all patterns

  21. PVM NAS benchmarks: Parallel Integer Sort (IS)

  22. Traffic Matrix for PVM IS benchmark

  23. Traffic Matrix for PVM IS benchmark: the placement of host1 on the network is crucial
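One simple way to characterize topology from such a traffic matrix is to scale by the peak entry and keep only the links that carry more than some fraction of it. This is an illustrative sketch only; the thresholding scheme and names are assumptions, not the talk's actual analysis:

```python
# Threshold a byte-count traffic matrix into a 0/1 adjacency matrix:
# a link "exists" if it carries at least `threshold` of the peak load.
def characterize(matrix, threshold=0.1):
    peak = max((v for row in matrix for v in row), default=0)
    if peak == 0:                         # no traffic observed at all
        return [[0] * len(row) for row in matrix]
    return [[1 if v >= threshold * peak else 0 for v in row]
            for row in matrix]
```

The resulting adjacency matrix can then be compared against known patterns (mesh, toroid, hypercube, tree, all-to-all), and the raw entries also show which hosts dominate the load and therefore matter most for placement.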

  24. An Online Topology Inference Framework: VTTIF Goal: To automatically detect, monitor, and report the global traffic matrix for a set of VMs running on an overlay network

  25. Overall Design • VNET • Abstraction: A set of VMs on the same Layer 2 network • Virtual Ethernet LAN

  26. A VNET virtual layer: a virtual LAN over the wide area (the VNET layer sits above the physical layer)

  27. Overall Design • VNET abstraction: a set of VMs on the same Layer 2 network • Extend VNET to include the required features • Monitoring at the Ethernet packet level • The challenge: no manual control; how to detect interesting parallel-program communication?

  28. Detecting interesting phenomena • Triggers: certain address properties, traffic rate, etc. • Reactive mechanisms: rate-based monitoring with non-uniform discrete event sampling (like a burglar alarm) • Proactive mechanisms: support for queries by an external agent, e.g. "What is the traffic matrix for the last n seconds?" (like video surveillance)
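The reactive "burglar alarm" mechanism can be sketched as a sliding-window rate detector that fires when traffic crosses a threshold. The class name, window size, and threshold here are assumed parameters for illustration, not values from the talk:

```python
# Rate-based change detection: report True when the byte rate over a
# sliding time window reaches a configured threshold.
from collections import deque

class RateDetector:
    def __init__(self, window, threshold_bps):
        self.window = window           # seconds of history to keep
        self.threshold = threshold_bps # bytes/second that counts as "interesting"
        self.samples = deque()         # (timestamp, nbytes) pairs

    def observe(self, t, nbytes):
        """Record a packet of nbytes seen at time t; return True on alarm."""
        self.samples.append((t, nbytes))
        # Drop samples that have fallen out of the window.
        while self.samples and self.samples[0][0] < t - self.window:
            self.samples.popleft()
        rate = sum(b for _, b in self.samples) / self.window
        return rate >= self.threshold

d = RateDetector(window=2, threshold_bps=100)
d.observe(0, 50)     # quiet: 25 B/s, no alarm
d.observe(1, 300)    # burst: 175 B/s, alarm fires
```

Sampling only on packet arrivals makes this a non-uniform discrete-event scheme: idle periods cost nothing, which is what makes always-on monitoring cheap.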

  29. Per-host architecture: each physical host runs a VM attached to a local VNET daemon on the VNET overlay network; the daemon's traffic analyzer performs rate-based change detection and maintains a traffic matrix, which a query agent exposes to the VM network scheduling agent; the daemon connects to the other VNET daemons

  30. Traffic Matrix Aggregation • Each VNET daemon keeps track of a local traffic matrix • Need to aggregate this information at the proxy daemon for a global view • When do you push the traffic matrix? When the rate falls, the local daemons push theirs • The operation is associative: reduction trees for scalability
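The aggregation step relies on the merge being associative: summing matrices elementwise gives the same global view whether the partial sums are combined at the proxy directly or along a reduction tree. A minimal sketch, with all names hypothetical:

```python
# Aggregate per-daemon traffic matrices into one global matrix.
# Because elementwise addition is associative, partial sums can be
# combined in any order along a reduction tree.
from functools import reduce

def merge(a, b):
    """Elementwise sum of two same-shaped traffic matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def aggregate(local_matrices):
    """Global traffic matrix from any number of local ones."""
    return reduce(merge, local_matrices)

local_a = [[0, 10], [5, 0]]   # seen by daemon A
local_b = [[0, 2], [40, 0]]   # seen by daemon B
global_view = aggregate([local_a, local_b])
```

Interior tree nodes simply call `merge` on their children's partial results before forwarding upward, so the proxy daemon never has to receive every local matrix directly.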

  31. Evaluation • Used 4 Virtual Machines over VNET • NAS IS benchmark

  32. Conclusions • It is possible to infer the topology with low-level traffic monitoring • A traffic inference framework (VTTIF) for virtual machines • Ready to move on to future steps: adaptation for performance

  33. Current Work • Capabilities for dynamic adaptation in VNET • Spatial inference and network adaptation for improved performance • Preliminary results: up to 40% improvement in execution time • Looking into further benefits of dynamic adaptation

  34. For more information • http://virtuoso.cs.northwestern.edu • VNET is available for download • PLAB web site: plab.cs.northwestern.edu
