
Interdomain transport for VMs



Presentation Transcript


  1. XenSocket. Suzanne McIntosh, Security, Privacy and Extensible Technologies, IBM T. J. Watson Research Center, skranjac@us.ibm.com. Interdomain transport for VMs. Xen Summit 2007 | April 17, 2007

  2. XenSocket Team. IBM T. J. Watson Research Center: Xiaolan Zhang, Pankaj Rohatgi, Suzanne McIntosh. BAE Systems: John Linwood Griffin. We would like to thank the following people: Ronald Perez, Douglas Lee Schales, Anthony Liguori, Ryan Harper, Muli Ben-Yehuda, Eric Van Hensbergen, Reiner Sailer, Stefan Berger, Wesley Most

  3. XenSocket Project – Background

  4. XenSocket Project – The Objective Use Xen to enhance security of System S, a stream processing system described as follows: • High-throughput, large-scale, distributed stream processing system • Security architecture being developed by IBM Research • Extracts important information by analyzing voluminous amounts of unstructured and mostly irrelevant data • Example applications • Analyze audio, video and data feeds to support trading activities in financial institutions • Disaster response through analysis of vehicular movements, traffic and other sensors, news reports, etc.

  5. XenSocket Project – The Requirements • Enhance security with minimal performance hit • Throughput is key metric of goodness for our purposes • Achieve throughput approaching that of UNIX domain socket • Especially targeting message sizes in 10 KB – 100 KB range • Facilitate porting of code to the Xen environment • Implement socket-based interface

  6. XenSocket Project – The Problem Unfortunately, interdomain communication using the Xen virtual network fell short of the throughput required for System S. • Inefficient same-system networking performance is a well-known problem with respect to VMs [1] [2] • We speculate that the inefficiency can be attributed to • Overhead incurred by the TCP/IP stack • Repeated hypercalls to invoke Xen page flipping

  7. XenSocket Project – The Solution Proof-of-concept built and tested in Xen 3.0.2. • Bypass use of TCP stack • Replace page-flipping with copy • Statically allocate memory buffers to be shared between two domains • Use sockets-based interface to shared-memory-based transport • No modification to Xen or OS required • XenSocket compiles into a kernel module

  8. XenSocket Design – Overview • Sockets-based interface to shared memory buffers for domain-to-domain communication • Provides a one-way tunnel between sender and receiver domains • Conserves memory when only one-way communication is needed • Two types of memory pages are shared by the endpoints • Descriptor page (4KB) for storing state and control information • Buffer pages (multiple 4KB pages) form the circular buffer for writing and reading data

  9. XenSocket Architecture [diagram] • Control plane: shared descriptor page (one 4KB page); each endpoint tracks available_bytes together with its own send_offset or recv_offset, via xen_sendmsg() and xen_recvmsg() • Data plane: 128KB shared circular buffer (32 4KB pages) carrying the Tx data stream out of the sender app and the Rx data stream into the receiver app

  10. XenSocket Implementation – Receiver • Calls the socket() API to create a socket • Calls the bind() API • Binds the socket to an address • Allocates physical memory for the descriptor page and shared circular buffer • Returns the grant table reference of the descriptor page • Uses the sender's domain ID to allocate an event channel for communication with the sender • Calls read() or recv() to receive data • Blocks until data is detected in the circular buffer

  11. XenSocket Implementation – Sender • Calls socket() to create a socket • Calls connect() • Uses the supplied receiver domain ID and the grant table reference of the shared descriptor page • Maps the physical pages of the shared circular buffer into the sender's virtual address space • Establishes the other end of the event channel to communicate events • Calls send() or write() to transmit data • Blocks if the buffer is full

  12. XenSocket Implementation – Data Transfer Algorithm • The core piece of the implementation is an efficient data transfer algorithm • The send and receive algorithms use one shared control variable, available_bytes • Indicates the number of bytes available for writing in the circular buffer • Sender and receiver maintain local read/write offsets into the circular buffer • Offsets are not shared

  13. XenSocket Performance XenSocket evaluation environment • IBM HS20 blade with dual 2.8 GHz Pentium Xeon processors and 4 GB RAM • Used netperf version 2.4.2 as our primary benchmark • All reported data were collected on Xen 3.0.2 and Linux version 2.6.16.18 • Each test was run 3 times, with the average reported • All experiments were run in single-CPU mode with hyper-threading disabled to minimize performance variation

  14. XenSocket Performance – Common Message Sizes Throughput comparison of XenSocket vs. UNIX domain socket and TCP for message sizes between 512 bytes and 16 KB. XenSocket achieves up to 72 times the throughput of a standard TCP stream at a message size of 16 KB.

  15. XenSocket Performance – Larger Message Sizes Throughput comparison of XenSocket vs. UNIX domain socket and TCP for large message sizes. Both XenSocket and the UNIX domain socket see a large drop-off when the message size reaches 512 KB and then stabilize around 5-6 Mb/s. The performance curves invert at a message size of 512 KB, beyond which XenSocket outperforms the UNIX domain socket.

  16. Next Steps What about the performance of TCP under Windows and other OSs? How does XenSocket perform in the current Xen release? Our XenSocket design is a one-way communication pipe between two domains. • The traditional view of a socket is a two-way mechanism • An alternate design would include variable-size circular buffers with logic capable of adapting the buffer reservation size to the actual usage of the buffer

  17. References • [1] A. Menon, A. L. Cox, and W. Zwaenepoel. Optimizing network virtualization in Xen. In 2006 USENIX Annual Technical Conference, pages 15–28, Boston, Massachusetts, USA, June 2006. • [2] A. Menon, J. R. Santos, Y. Turner, G. J. Janakiraman, and W. Zwaenepoel. Diagnosing performance overheads in the Xen virtual machine environment. In VEE'05: First International Conference on Virtual Execution Environments, pages 13–23, Chicago, Illinois, USA, June 2005.

  18. Questions? Suzanne McIntosh, Security, Privacy and Extensible Technologies, IBM T. J. Watson Research Center, skranjac@us.ibm.com
