PERFORMANCE DIFFERENTIATION OF NETWORK I/O in XEN • by Kuriakose Mathew (08305062), under the supervision of Prof. Purushottam Kulkarni and Prof. Varsha Apte
Outline • Introduction • Need for Network I/O differentiation in Xen • Implementation of the Tool for Network I/O configuration in Xen • Validation of the Tool • Conclusion and Future work
Virtualization • Virtualization – abstraction of resources • [Figure: a single-OS system vs. a virtualized system; http://software.intel.com]
Need for Performance Differentiation • [Figure: two clients (Client1, Client2) served by the same virtualized system; http://software.intel.com]
Xen – Open Source VMM • [Figure: Xen architecture – Dom-0 and Dom-U running above the VMM, which runs on the hardware; David Chisnall, The Definitive Guide to the Xen Hypervisor]
Performance Differentiation in Xen • CPU: tools exist for configuring the credit scheduler • Weights – relative share of physical CPU time a domain gets (e.g. 128:256, 256:512) • Caps – absolute maximum of physical CPU time a domain gets (e.g. 20%, 50%) • e.g. xm sched-credit -d <domain_name> -w 256 -c 20 • Network I/O: no means of configuration and no weighted scheduling; only fair scheduling is provided
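For example (domain names and values hypothetical), giving dom2 twice dom1's share of CPU time while capping dom1 at 20% of a physical CPU: • xm sched-credit -d dom1 -w 256 -c 20 • xm sched-credit -d dom2 -w 512 -c 0 (a cap of 0 means no cap)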
Network I/O Differentiation in Xen • With Dom1 at a 20% CPU limit and Dom2 at an 80% CPU limit, two different outcomes are possible: • Domain 1 – 25% CPU, 2 MBps; Domain 2 – 35% CPU, 2 MBps • Domain 1 – 25% CPU, 2 MBps; Domain 2 – 35% CPU, 5 MBps • With the existing methods in Xen, network bandwidth utilization cannot be controlled • Need for a separate control mechanism for network I/O differentiation
Previous Work at IIT B • DDP 08-09 work proposed and implemented a weighted network I/O scheduler providing bandwidth limits and guarantees for VMs in Xen • limits – maximum bandwidth usage allowed for a VM • guarantees – amount of available bandwidth that a VM is guaranteed
Shortcomings of Previous Work • Hard-coding of values in the Netback driver • Kernel recompilation required to change a parameter • Lack of a dynamic configuration tool • Difficulty in experimentation and validation • Interference from the CPU scheduler • Complex sharing of credit values
Overall MTP Goal • Design and implement a tool for specification and dynamic reconfiguration of limits for network I/O in Xen • Simplify the existing algorithm that provides bandwidth guarantees for network I/O • Study the interference of the Xen CPU scheduler with the network I/O configuration • Validate with realistic I/O-intensive applications
High-level Specification of the Tool • Dynamically specify the bandwidth usage limits and guarantees of a virtual machine • Usage • xmsetbw -d <domain_name> -g <guarantee> -l <limit> • domain_name – name of the domain whose bandwidth parameters need to be set • guarantee – the bandwidth guarantee provided to the domain • limit – the maximum bandwidth usage allowed for the domain
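For example (domain name and values hypothetical; units assumed to be MBps), xmsetbw -d vm1 -g 2 -l 5 would guarantee vm1 a bandwidth of 2 MBps while capping its usage at 5 MBps.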
Xen Device Driver Model • The tool's input is delivered to the back-end driver in Dom-0 • [Figure: Xen split driver model with front-end and back-end drivers; David Chisnall, The Definitive Guide to the Xen Hypervisor]
Packet Transmission in Xen • [Figure: packet transmission/reception path, including Xenstore; Sriram Govindan, "Xen and Co.: Communication-aware CPU Scheduling"]
Xenstore and Xenbus • Xenstore • Used to exchange out-of-band information • A hierarchical, filesystem-like database for sharing small amounts of information between domains • Contains 3 main paths • /vm • /local/domain • /tools
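For instance, the standard command-line clients can inspect and modify the store (paths and values illustrative): xenstore-ls /local/domain/1 lists the keys in domain 1's subtree, and xenstore-write /tools/netbw/1 "2 5" writes a value that any watch registered on that path will be notified of.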
Xenbus • Interface to Xenstore • [Figure: VMs and userspace components reach Xenstore through Xenbus]
Xenbus (cont.) • Provides APIs for reading and writing data • Supports transactions for grouped execution of operations • Watches • Associated with a key in Xenstore • A change in the key's value triggers a call-back • User–kernel interaction (a minimal sketch of watch registration follows) • register_xenbus_watch • register_xenstore_notifier
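A minimal kernel-side sketch of a Xenstore watch (the watched path is hypothetical, and the call-back signature shown matches the 2.6-era xenbus API):

    #include <xen/xenbus.h>

    /* Fires whenever a key under the watched path changes */
    static void bw_changed(struct xenbus_watch *watch,
                           const char **vec, unsigned int len)
    {
            /* vec[XS_WATCH_PATH] names the key that changed; re-read
             * the limit/guarantee values and update per-domain state */
    }

    static struct xenbus_watch bw_watch = {
            .node     = "/tools/netbw",   /* hypothetical path */
            .callback = bw_changed,
    };

    /* After Xenstore is up: register_xenbus_watch(&bw_watch); */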
Implementation of Tool for Dynamic Network Configuration • Userspace side: the tool parses the command, resolves the domain name to a domain id, formats the parameters into a string, and writes it to a predefined Xenstore path (as sketched below)
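A minimal userspace sketch of the Xenstore write step, assuming libxenstore and the same hypothetical /tools/netbw path (resolution of the domain name to a domain id is omitted):

    #include <stdio.h>
    #include <string.h>
    #include <xs.h>                         /* libxenstore */

    int main(int argc, char **argv)
    {
            struct xs_handle *xs;
            char path[64], val[64];

            if (argc != 4) {
                    fprintf(stderr, "usage: %s <domid> <guarantee> <limit>\n", argv[0]);
                    return 1;
            }

            xs = xs_daemon_open();          /* connect to xenstored */
            if (!xs)
                    return 1;

            /* Format the parameters and write them to the predefined
             * path; the write triggers the watch registered by Netback */
            snprintf(path, sizeof(path), "/tools/netbw/%s", argv[1]);
            snprintf(val, sizeof(val), "%s %s", argv[2], argv[3]);
            if (!xs_write(xs, XBT_NULL, path, val, strlen(val)))
                    fprintf(stderr, "xs_write failed\n");

            xs_daemon_close(xs);
            return 0;
    }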
Implementation of Tool (cont.) • Kernel side: two levels of call-backs in Netback (see the sketch after this list) • Start-up: register an initial call-back; when it fires, read the initial values and register a call-back on the predefined Xenstore path • On a Xenstore write to that path: the call-back reads the new value and updates an array of structures and flags • Transmit path: if the structures were modified, packets are transmitted using the updated values
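A sketch of the first-level registration via a Xenstore notifier (names illustrative; bw_watch is the watch from the earlier sketch):

    #include <linux/notifier.h>
    #include <xen/xenbus.h>

    /* Level 1: fires once Xenstore comes up at start-up and performs
     * the level-2 registration of the watch on the predefined path */
    static int netbw_xs_ready(struct notifier_block *nb,
                              unsigned long event, void *data)
    {
            register_xenbus_watch(&bw_watch);   /* level-2 call-back */
            return NOTIFY_DONE;
    }

    static struct notifier_block netbw_nb = {
            .notifier_call = netbw_xs_ready,
    };

    /* From Netback initialisation: register_xenstore_notifier(&netbw_nb); */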
Modification in Netback driver • Credit-based scheduling in the Netback driver • By default, the network scheduler assigns equal credits to all domains – fair sharing of bandwidth • Modified to assign weighted credits according to the configured limits (sketched below) • [Figure: network scheduler, from the DDP report]
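A simplified sketch of weighted credit replenishment (field and function names follow the 2.6-era Netback style but are illustrative):

    /* Replenish a domain's transmit credit each scheduling interval.
     * credit_bytes is now derived from the per-domain limit written
     * via Xenstore instead of being an equal share for every domain. */
    static void tx_add_credit(netif_t *netif)
    {
            unsigned long max_credit;

            max_credit = netif->remaining_credit + netif->credit_bytes;
            if (max_credit < netif->remaining_credit)
                    max_credit = ULONG_MAX;   /* overflow guard */

            netif->remaining_credit = max_credit;
    }

    /* On the transmit path a packet is sent only while the domain has
     * remaining credit; otherwise it waits for the next interval. */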
Can the bandwidth limit for each domain be set for a step increase in the bandwidth limit? • C1 – all doms 1 MBps • C2 – dom4 2 MBps • C3 – dom3 2 MBps, dom4 3 MBps • C4 – dom2 2 MBps, dom3 3 MBps, dom4 4 MBps • [Chart: in C4, measured bandwidths of 0.96, 1.92, 2.91 and 3.87 MBps against limits of 1, 2, 3 and 4 MBps] • Bandwidth is limited to within 96.4% of the set limit
Can the bandwidth limit for each domain be set for a step decrease in the bandwidth limit? • C1 – dom1 4 MBps, dom2 3 MBps, dom3 2 MBps, dom4 1 MBps • C2 – dom1,2 3 MBps, dom3 2 MBps, dom4 1 MBps • C3 – dom1,2,3 2 MBps, dom4 1 MBps • C4 – all doms 1 MBps • [Chart: in C1, measured bandwidths of 3.87, 2.91, 1.92 and 0.96 MBps against limits of 4, 3, 2 and 1 MBps] • Bandwidth is limited to within 96.4% of the set limit
How is the bandwidth shared among domains that are not bandwidth limited? • C1 – no limit set • C2 – dom5 1 MBps • C3 – dom5 2 MBps • [Chart: dom5 measured at 0.95 and 1.93 MBps under its 1 and 2 MBps limits] • Bandwidth is fairly shared among domains that are not bandwidth limited
How does the bandwidth change with time for a step change in the limit for a single domain? • Bandwidth settles in 6.7 sec (on average) after a step change
How does the bandwidth change with time for a step change in the limit for multiple domains? • Bandwidth limits can be set for multiple domains independently
How does the CPU utilization in DomU and Dom-0 vary for different bandwidth limits for a DomU? • [Chart: CPU utilization of DomU and Dom-0 for increasing bandwidth limits; one series rose from 1.6 to 12.4 and the other from 54.5 to 206.6] • DomU utilization increases proportionally with the limit, while Dom-0 utilization does not
Summary of Work Done • Implemented a tool for specification and dynamic reconfiguration of bandwidth limits for VMs in Xen • Modified the Netback driver of Xen to provide bandwidth limits • Performed experiments and validation to verify the bandwidth-limit configuration
Implementation Problems • Compilation, installation and debugging of Xen and domains • Code exploration to understand the Xen network I/O model – not much documentation available • Xenstore is generally used for guest-domain-to-Domain-0 communication • Directly calling the Xenbus API from Netback caused problems – two levels of call-backs were needed
Future Work • Simplify the algorithm that implements the bandwidth guarantee • Study the effect of the CPU scheduler on bandwidth limits and guarantees • Validate with realistic I/O-intensive applications