
PERFORMANCE DIFFERENTIATION OF NETWORK I/O in XEN



Presentation Transcript


  1. PERFORMANCE DIFFERENTIATION OF NETWORK I/O IN XEN by Kuriakose Mathew (08305062), under the supervision of Prof. Purushottam Kulkarni and Prof. Varsha Apte

  2. Outline
  • Introduction
  • Need for network I/O differentiation in Xen
  • Implementation of the tool for network I/O configuration in Xen
  • Validation of the tool
  • Conclusion and future work

  3. Virtualization
  • Virtualization – abstraction of physical resources
  [Figure: a single-OS system vs. a virtualized system; http://software.intel.com]

  4. Need for Performance Differentiation
  [Figure: two clients sharing a virtualized system; http://software.intel.com]

  5. Xen – an Open-Source VMM
  [Figure: Xen architecture with Dom-0, Dom-U, the VMM and hardware; David Chisnall, The Definitive Guide to the Xen Hypervisor]

  6. Performance Differentiation in Xen
  • Tools exist for configuring the CPU credit scheduler
  • Weights – share of the physical CPU time that the domain gets; relative (e.g. 128:256, 256:512)
  • Caps – maximum physical CPU time that the domain gets; absolute (e.g. 20%, 50%)
  • e.g. xm sched-credit -d <domain_name> -w 256 -c 20
  • For network I/O there is no such means of configuration and no weighted scheduling; Xen provides only fair scheduling
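The interplay of relative weights and absolute caps can be sketched as a small calculation. This is a toy model, not the actual credit-scheduler code, and it ignores the redistribution of CPU time left unused by capped domains:

```python
# Toy model of the credit scheduler's weight/cap semantics:
# weights are relative shares, caps are absolute percentage ceilings.

def cpu_shares(weights, caps):
    """weights: dict name -> relative weight; caps: dict name -> % cap (absent = uncapped)."""
    total = sum(weights.values())
    shares = {}
    for dom, w in weights.items():
        fair = 100.0 * w / total          # proportional share of total CPU time
        cap = caps.get(dom)
        shares[dom] = min(fair, cap) if cap is not None else fair
    return shares

# Weights 256:512 would split the CPU 1:2 (33.3% : 66.7%),
# but a 20% cap on dom1 overrides its proportional share:
print(cpu_shares({"dom1": 256, "dom2": 512}, {"dom1": 20}))
```

Under these assumptions dom1 is held to 20% even though its weight alone would entitle it to a third of the CPU.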

  7. Network I/O Differentiation in Xen
  [Figure: two scenarios with Dom1 capped at 20% CPU and Dom2 at 80% CPU; in one, Dom1 and Dom2 each achieve 2 MBps, while in the other Dom2 achieves 5 MBps under the same CPU caps]
  • With the existing methods in Xen, network bandwidth utilization cannot be controlled
  • Hence the need for a separate control mechanism for network I/O differentiation

  8. Previous Work at IIT B
  • The DDP 08-09 work proposed and implemented a weighted network I/O scheduler providing bandwidth limits and guarantees for VMs in Xen
  • Limits – maximum bandwidth usage for a VM
  • Guarantees – amount of the available bandwidth that a VM is assured

  9. Shortcomings of Previous Work
  • Hard-coding of values in the netback driver
  • Recompilation of the kernel required for any change of parameter
  • Lack of a dynamic configuration tool, making experimentation and validation difficult
  • Interference from the CPU scheduler
  • Complex sharing of credit values

  10. Overall MTP Goal
  • Design and implement a tool for specification and dynamic reconfiguration of network I/O limits in Xen
  • Simplify the existing algorithm that provides bandwidth guarantees for network I/O
  • Study the interference of the Xen CPU scheduler with the network I/O configuration
  • Validate with realistic I/O-intensive applications

  11. High-level Specification of the Tool
  • Dynamically specify the bandwidth usage limits and guarantees of a virtual machine
  • Usage: xmsetbw -d <domain_name> -g <guarantee> -l <limit>
  • domain_name – name of the domain whose bandwidth parameters need to be set
  • guarantee – the bandwidth guarantee provided to the domain
  • limit – the maximum bandwidth usage allowed for the domain
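A minimal sketch of how the xmsetbw front end described above might parse its options. The flags -d, -g and -l follow the slide; the function name, the integer units, and the argparse-based implementation are assumptions:

```python
# Hypothetical option parsing for the xmsetbw tool (a sketch, not the
# project's actual code): resolve -d/-g/-l into named fields.
import argparse

def parse_xmsetbw(argv):
    p = argparse.ArgumentParser(prog="xmsetbw")
    p.add_argument("-d", dest="domain", required=True,
                   help="name of the domain to configure")
    p.add_argument("-g", dest="guarantee", type=int,
                   help="bandwidth guarantee for the domain")
    p.add_argument("-l", dest="limit", type=int,
                   help="maximum bandwidth usage for the domain")
    return p.parse_args(argv)

args = parse_xmsetbw(["-d", "dom1", "-g", "1024", "-l", "2048"])
print(args.domain, args.guarantee, args.limit)
```

The parsed values would then be formatted and written to Xenstore, as the implementation slides describe.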

  12. Xen Device Driver Model
  [Figure: split device driver model with front-end and back-end drivers and the tool input; David Chisnall, The Definitive Guide to the Xen Hypervisor]

  13. Packet Transmission in Xen
  [Figure: packet transmission/reception path, including Xenstore; Sriram Govindan, "Xen and Co.: Communication-aware CPU Scheduling"]

  14. Xenstore and Xenbus
  • Xenstore
  • Exchanges out-of-band information between domains
  • A hierarchical, filesystem-like database used for sharing small amounts of information between domains
  • Contains three main paths: /vm, /local/domain, /tools
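As a rough illustration of the hierarchical, filesystem-like layout, the toy store below keys values by slash-separated paths such as /local/domain/<domid>/... . It is a plain-Python model of the idea, not the Xenstore API:

```python
# Toy model of Xenstore's hierarchical key/value layout:
# keys are slash-separated paths, and listing a path yields its children.
class ToyStore:
    def __init__(self):
        self.data = {}

    def write(self, path, value):
        self.data[path.rstrip("/")] = value

    def read(self, path):
        return self.data.get(path.rstrip("/"))

    def ls(self, prefix):
        # Return the immediate child names under a path.
        prefix = prefix.rstrip("/") + "/"
        return sorted({p[len(prefix):].split("/")[0]
                       for p in self.data if p.startswith(prefix)})

s = ToyStore()
s.write("/local/domain/1/name", "dom1")
s.write("/local/domain/2/name", "dom2")
print(s.ls("/local/domain"))   # the per-domain subtrees: ['1', '2']
```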

  15. Xenbus
  • Interface to Xenstore
  [Figure: VMs and userspace accessing Xenstore through Xenbus]

  16. Xenbus (cont.)
  • Provides APIs for reading and writing data
  • Supports transactions for grouped execution of operations
  • Watches – associated with a key in Xenstore; a change in the key's value triggers a call-back
  • User–kernel interaction: register_xenbus_watch, register_xenstore_notifier
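The watch mechanism can be mimicked in a few lines of plain Python. This is a model of the behaviour, not the kernel's register_xenbus_watch API: a callback registered on a key fires whenever the key's value changes:

```python
# Toy model of Xenstore watches: registering a callback on a key means
# every change to that key's value triggers the callback.
class WatchedStore:
    def __init__(self):
        self.values = {}
        self.watches = {}

    def register_watch(self, key, callback):
        self.watches.setdefault(key, []).append(callback)

    def write(self, key, value):
        changed = self.values.get(key) != value
        self.values[key] = value
        if changed:
            for cb in self.watches.get(key, []):
                cb(key, value)   # fire the watch

events = []
store = WatchedStore()
store.register_watch("/tool/bwlimit/dom1",
                     lambda k, v: events.append((k, v)))
store.write("/tool/bwlimit/dom1", "2048")   # triggers the callback
store.write("/tool/bwlimit/dom1", "2048")   # unchanged value: no callback
```

The path name /tool/bwlimit/dom1 here is illustrative, not the path the tool actually uses.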

  17. Implementation of the Tool for Dynamic Network Configuration
  [Figure: the user's command is taken by the userspace tool, which resolves the domain name to a domain id, formats the parameter string, and writes it to a predefined Xenstore path]

  18. Implementation of the Tool (cont.)
  [Figure: at start-up the netback driver registers an initial call-back on the predefined Xenstore path via Xenbus and reads the initial values; when the tool writes to Xenstore, call-back 1 reads the new value and updates an array of per-domain structures and flags, and call-back 2 checks whether the structures were modified before packets are transmitted]

  19. Modification in the Netback Driver
  • Credit scheduling in the netback driver
  • By default, the network scheduler assigns equal credits, giving fair sharing of bandwidth
  • Modified to assign weighted credits
  [Figure: network scheduler; DDP report]
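A toy sketch of weighted credit assignment, assuming a simple replenish-then-spend model. The function names and the byte-based replenishment interval are illustrative, not the netback code: each interval a domain receives credit in proportion to its weight, a packet is transmitted only if enough credit remains, and equal weights recover the default fair sharing:

```python
# Toy weighted credit scheduler: per-interval credit is split among
# domains in proportion to their weights, and transmission spends credit.

def replenish(credits, weights, interval_bytes):
    """Grant each domain a weight-proportional share of the interval's bytes."""
    total = sum(weights.values())
    for dom, w in weights.items():
        credits[dom] = credits.get(dom, 0) + interval_bytes * w // total
    return credits

def transmit(credits, dom, pkt_bytes):
    """Send a packet only if the domain has enough credit left."""
    if credits.get(dom, 0) >= pkt_bytes:
        credits[dom] -= pkt_bytes
        return True     # packet sent
    return False        # out of credit: domain must wait for replenishment

# dom2, with twice the weight, receives twice the credit per interval:
c = replenish({}, {"dom1": 1, "dom2": 2}, 3000)
print(c)
```

Capping how much credit a domain can accumulate is one way such a scheme enforces a bandwidth limit, since a domain can never transmit faster than its credit accrues.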

  20. Can the bandwidth limit for each domain be set for a step increase in the limit?
  • Configurations: C1 – all domains 1 MBps; C2 – dom4 2 MBps; C3 – dom3 2 MBps, dom4 3 MBps; C4 – dom2 2 MBps, dom3 3 MBps, dom4 4 MBps
  [Figure: measured bandwidths of 0.96, 1.92, 2.91 and 3.87 MBps against the 1–4 MBps limits]
  • Bandwidth is limited to within 96.4% of the configured value

  21. Can the bandwidth limit for each domain be set for a step decrease in the limit?
  • Configurations: C1 – dom1 4 MBps, dom2 3 MBps, dom3 2 MBps, dom4 1 MBps; C2 – dom1,2 3 MBps, dom3 2 MBps, dom4 1 MBps; C3 – dom1,2,3 2 MBps, dom4 1 MBps; C4 – all domains 1 MBps
  [Figure: measured bandwidths of 0.96, 1.92, 2.91 and 3.87 MBps against the configured limits]
  • Bandwidth is limited to within 96.4% of the configured value

  22. How is bandwidth shared among domains that are not bandwidth limited?
  • Configurations: C1 – no limit set; C2 – dom5 1 MBps; C3 – dom5 2 MBps
  [Figure: measured bandwidths of 0.95 and 1.93 MBps for the limited domain]
  • Bandwidth is fairly shared among the domains that are not bandwidth limited

  23. How does the bandwidth change with time for a step change in the limit for a single domain?
  • Bandwidth settles within 6.7 s (on average) after a step change

  24. How does the bandwidth change with time for a step change in the limit for multiple domains?
  • Bandwidth limits can be set for multiple domains independently

  25. How does the CPU utilization in DomU and Dom-0 vary with the bandwidth limit of a DomU?
  [Figure: CPU utilization across increasing bandwidth limits; one series ranges from 54.5 to 206.6, the other from 1.6 to 12.4]
  • DomU utilization increases proportionally with the bandwidth limit, while Dom-0 utilization does not

  26. Summary of Work Done
  • Implemented a tool for specification and dynamic reconfiguration of bandwidth limits for VMs in Xen
  • Modified the netback driver of Xen to provide bandwidth limits
  • Experimentation and validation to verify the bandwidth-limit configuration

  27. Implementation Problems
  • Compilation, installation and debugging of Xen and its domains
  • Code exploration to understand the Xen network I/O model; little documentation available
  • Xenstore is generally used between a guest domain and Domain-0
  • Calling the Xenbus APIs directly in the netback driver caused problems, so two levels of call-backs were needed

  28. Future Work
  • Simplify the algorithm that implements the bandwidth guarantee
  • Study the effect of the CPU scheduler on the bandwidth limits and guarantees
  • Validate with realistic I/O-intensive applications

  29. THANK YOU
