
Virtualizing a Wireless Network: The Time-Division Approach



Presentation Transcript


  1. Virtualizing a Wireless Network: The Time-Division Approach Suman Banerjee, Anmol Chaturvedi, Greg Smith, Arunesh Mishra Contact email: suman@cs.wisc.edu http://www.cs.wisc.edu/~suman Department of Computer Sciences University of Wisconsin-Madison Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory

  2. Virtualizing a wireless network • Virtualize resources of a node • Virtualize the medium • Particularly critical in wireless environments • Approaches • Time • Frequency • Space • Code Courtesy: ORBIT

  3. Virtualizing a wireless network • Virtualize resources of a node • Virtualize the medium • Particularly critical in wireless environments • Approaches • Time • Frequency • Space • Code [Figure: experiments Expt-1, Expt-2, Expt-3 multiplexed across Time, vs. across Space, Frequency, Code, etc.]

  4. TDM-based virtualization • Need synchronous behavior between node interfaces • Between transmitter and receiver • Between all interferers and receiver [Figure: node pairs A-B and C-D; synchronized when both links run Expt-1, conflicting when A-B runs Expt-1 while C-D runs Expt-2]
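The synchrony requirement on this slide can be stated as a simple check: in any time slot, every node within interference range of a receiver must be assigned to the same experiment. A toy Ruby sketch of that check (all names here are illustrative, not from the actual overseer code):

```ruby
# Check that a TDM slot assignment is "synchronous": a receiver and every
# node that can interfere with it run the same experiment in that slot.
# slot_assignment: { node => experiment_id }
# interference:    { node => [nodes within interference range] }
def synchronous_slot?(slot_assignment, interference)
  slot_assignment.all? do |node, expt|
    interference.fetch(node, []).all? { |nbr| slot_assignment[nbr] == expt }
  end
end
```

With links A-B and C-D as on the slide's figure, assigning all four nodes to Expt-1 passes the check, while putting C-D on Expt-2 inside A's interference range fails it.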

  5. Problem statement To create a TDM-based virtualized wireless environment as an intrinsic capability in GENI • This work is in the context of TDM-virtualization of ORBIT

  6. Current ORBIT schematic [Diagram: Controller (UI, nodeHandler) driving a nodeAgent on each grid node] • Manual scheduling • Single experiment on grid

  7. Our TDM-ORBIT schematic [Diagram: Controller runs a Master Overseer and one nodeHandler per experiment; each node runs a Node Overseer plus one nodeAgent per experiment VM; VM = User-Mode Linux] • Virtualization: abstraction + accounting • Fine-grained scheduling for multiple expts on grid • Asynchronous submission

  8. Overseers • Node overseer: • Add/remove experiment VMs • Swap experiment VMs • Monitor node health and experiment status • Mostly mechanism, no policy • Master overseer: • Policy-maker that governs the grid [Diagram: UI submits to the Master Overseer (experiment queue, scheduler, reporting/feedback handlers, monitor), which multicasts commands to Node Overseers]
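The mechanism/policy split above can be sketched in a few lines of Ruby. This is a toy model, not the real overseer code: the class name, the round-robin policy, and the method names are all assumptions made for illustration.

```ruby
# Toy master overseer: holds the experiment queue and decides (policy)
# which experiment VM the node overseers should swap in for each time slice.
class MasterOverseer
  def initialize(experiments)
    @queue = experiments.dup   # the experiment queue from the slide
  end

  # One simple policy: round-robin the queued experiments across slices.
  # Returns the experiment scheduled for each of n_slices slices; in the
  # real system each decision would be multicast to the node overseers.
  def schedule(n_slices)
    n_slices.times.map { |i| @queue[i % @queue.size] }
  end
end
```

The node overseer, by contrast, would only implement the mechanism side (add/remove/swap VMs) and execute whatever schedule arrives.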

  9. Virtualization • Why not process-level virtualization? • No isolation • Must share FS, address space, network stack, etc. • No cohesive “schedulable entity” • What other alternatives are there? • Other virtualization platforms (VMware, Xen, etc.)

  10. TDM: Virtualization • Virtualization • Experiment runs inside a User-Mode Linux VM • Wireless configuration • Guest has no way to read or set wifi config! • Wireless extensions in virtual driver relay ioctls to host kernel [Diagram: iwconfig in the Guest VM issues an ioctl() to virt_net in the UML kernel, which tunnels the ioctl() to net_80211 in the host kernel]
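The relay idea on this slide, reduced to a sketch: the guest's virtual driver cannot touch the real card, so configuration requests (the ioctls behind iwconfig) are serialized and forwarded to the host, which applies them to the real interface. The Ruby below only models the request flow; the class, the hash standing in for host wifi state, and the method names are invented for illustration.

```ruby
# Toy model of tunneled ioctls: guest-side get/set calls never touch
# hardware; they are forwarded to the host side, which applies them.
class IoctlRelay
  def initialize(host_config = {})
    @host_config = host_config   # stands in for the host kernel's wifi state
  end

  # Guest side: the virtual driver turns the request into a message.
  def guest_set(param, value)
    tunnel(op: :set, param: param, value: value)
  end

  def guest_get(param)
    tunnel(op: :get, param: param)
  end

  private

  # Host side: apply the relayed request to the real interface's config.
  def tunnel(req)
    case req[:op]
    when :set then @host_config[req[:param]] = req[:value]
    when :get then @host_config[req[:param]]
    end
  end
end
```

From the experiment's point of view, `iwconfig`-style gets and sets inside the VM behave as if the guest owned the card.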

  11. TDM: Routing [Diagram: experiment-channel ingress on the wifi interface passes through an iptables rule (DNAT: 192.169 -> 192.168) and the node routing table before reaching the destination VM at 192.168.x.y; nodeHandler commands arrive as multicast on eth (10.10.x.y) and mrouted forwards them to all VMs in the mcast group]
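The DNAT rewrite in the diagram is just a prefix substitution: experiments address nodes as 192.169.x.y, and the node rewrites the destination to the real 192.168.x.y before delivery. In the real system this is an iptables DNAT rule; the Ruby function below only demonstrates the address mapping itself.

```ruby
# Rewrite a destination address the way the slide's DNAT rule does:
# 192.169.x.y -> 192.168.x.y; anything else passes through unchanged.
def dnat(dest_ip)
  octets = dest_ip.split(".")
  return dest_ip unless octets[0, 2] == %w[192 169]
  (%w[192 168] + octets[2, 2]).join(".")
end
```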

  12. Synchronization challenges • Without tight synchronization, experiment packets might be dropped or misdirected • Host: VMs should start/stop at exactly the same time • Time spent restoring wifi config varies • Operating system is not an RTOS • Ruby is interpreted and garbage-collected • Network latency for overseer commands • Mean: 3.9 ms, Median: 2.7 ms, Std-dev: 6 ms • Swap time between experiments

  13. Synchronization: Swap time I • Variables involved in swap time • Largest contributor: wifi configuration time • More differences in wifi configuration = longer config time • Network latency for master commands • Ruby latency in executing commands

  14. Synchronization: Swap Time II • We can eliminate wifi config latency and reduce the effects of network and Ruby latencies • “Swap gaps” • A configuration timing buffer • VMs not running, but incoming packets are still received and routed to the right place
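The swap-gap idea above amounts to padding the schedule: every experiment slice is followed by a fixed gap in which no VM runs but reconfiguration happens, so config latency never eats into a slice. A small timing sketch (durations and names are illustrative):

```ruby
# Compute [run_start, run_end] for each slice when every slice of
# slice_ms is followed by a swap gap of gap_ms (the configuration buffer).
def slice_boundaries(n_slices, slice_ms, gap_ms)
  n_slices.times.map do |i|
    start = i * (slice_ms + gap_ms)
    [start, start + slice_ms]   # the gap runs from run_end to the next start
  end
end
```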

  15. Ruby Network Latency • Inside VM, Ruby shows anomalous network latency • Example at right: tcpdump and simple Ruby recv loop • No delays with C • Cause yet unknown [Terminal excerpt: tcpdump shows UDP datagrams to 224.4.0.1.9006 arriving steadily (00.000, 01.037, 01.065, 01.143, ..., 35.332), but the Ruby recv loop logs “received 30 bytes” at 00.035, then nothing until 26.192, followed by a burst of receives at 40.431-40.484 — a delay of 24+ secs]

  16. UI screen shots Time slice 1 Time slice 2

  17. Performance: Runtime Breakdown • Booting a VM is fast • Each phase slightly longer in new system • Ruby network delay causes significant variance in data set • Handler must approximate sleep times

  18. Performance: Overall Duration • Advantages • Boot duration • Disadvantages • Swap gaps

  19. Future work: short term • Improving synchrony between nodes • More robust protocol • Porting Ruby code to C, where appropriate • Dual interfaces • Nodes equipped with two cards • Switch between them during swaps, so that interface configuration can be preloaded at zero cost

  20. Essid: “expA” Mode: B Channel: 6 VM nodeAgent wifi0 VM VM nodeAgent nodeAgent wifi1 Essid: “expB” Mode: G Channel: 11 Dual interfaces Routing Logic “current card is…” config Node Overseer

  19. Future work: long term • Greater scalability • Allow each experiment to use, say, 100s of nodes to emulate 1000s of nodes • Intra-experiment TDM virtualization • Initial evaluation is quite promising

  22. Intra-experiment TDM Any communication topology can be modeled as a graph

  23. Intra-experiment TDM We can emulate all communication on the topology accurately, as long as we can emulate the reception behavior of the node with the highest degree
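The condition on this slide reduces to a graph property: for a topology given as an adjacency list, the node with the highest degree determines how much concurrent reception the testbed must be able to emulate. A minimal sketch (the example topology is made up):

```ruby
# Highest degree in a communication topology given as an adjacency list
# { node => [neighbors] }. This bounds the reception behavior that must
# be emulated per the slide's claim.
def max_degree(adj)
  adj.values.map(&:size).max
end
```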

  24. Intra-experiment TDM Testbed of 8 nodes Time-share of different logical nodes to physical facility nodes Time Unit 1

  25. Intra-experiment TDM Testbed of 8 nodes Time-share of different logical nodes to physical facility nodes Time Unit 2

  26. Intra-experiment TDM Testbed of 8 nodes Time-share of different logical nodes to physical facility nodes Time Unit 3
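The time-sharing shown on slides 24-26 can be sketched as partitioning the logical nodes into groups of at most 8 (the testbed size), with one partition active per time unit. The plain chunking below is a stand-in for the real mapping, which the next slide frames as a scheduling problem:

```ruby
# Partition logical nodes onto a testbed of n_physical nodes:
# each slice is the set of logical nodes active in one time unit.
def partitions(logical_nodes, n_physical)
  logical_nodes.each_slice(n_physical).to_a
end
```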

  27. Some challenges • How to perform the scheduling? • A mapping problem • How to achieve the right degree of synchronization? • Use of a fast backbone and real-time approaches • What are the implications of slowdown? • Bounded by the number of partitions
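The slowdown bound in the last bullet is simple arithmetic: emulating L logical nodes on P physical nodes requires ceil(L / P) time partitions, so the experiment runs at most that many times slower than real time.

```ruby
# Worst-case slowdown when time-sharing: the number of partitions needed
# to fit all logical nodes onto the physical testbed.
def slowdown_bound(logical, physical)
  (logical.to_f / physical).ceil
end
```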

  28. Conclusions • Increased utilization through sharing • More careful tuning needed for smaller time slices • Need chipset vendor support for very small time slices • Non-real-time apps, or apps with coarse real-time needs, are best suited to this virtualization approach
