Virtualizing a Wireless Network: The Time-Division Approach
Suman Banerjee, Anmol Chaturvedi, Greg Smith, Arunesh Mishra
Contact email: suman@cs.wisc.edu
http://www.cs.wisc.edu/~suman
Department of Computer Sciences, University of Wisconsin-Madison
Wisconsin Wireless and NetworkinG Systems (WiNGS) Laboratory
Virtualizing a wireless network
• Virtualize resources of a node
• Virtualize the medium
  • Particularly critical in wireless environments
• Approaches
  • Time
  • Frequency
  • Space
  • Code
[Figure: Expt-1, Expt-2, and Expt-3 sharing the testbed, separated either along the time axis or along space, frequency, code, etc. Courtesy: ORBIT]
TDM-based virtualization
• Need synchronous behavior between node interfaces
  • Between transmitter and receiver
  • Between all interferers and receiver
[Figure: node pairs A-B and C-D; in one time slot both pairs belong to Expt-1, in another A-B runs Expt-1 while C-D runs Expt-2]
Problem statement
• To create a TDM-based virtualized wireless environment as an intrinsic capability in GENI
• This work is in the context of TDM virtualization of ORBIT
Current ORBIT schematic
• Manual scheduling
• Single experiment on grid
[Figure: the controller's UI drives a single nodeHandler, which talks to one nodeAgent per grid node]
Our TDM-ORBIT schematic
• Virtualization: abstraction + accounting
• Fine-grained scheduling for multiple experiments on grid
• Asynchronous submission
[Figure: the controller runs the UI, one nodeHandler per experiment, and a Master Overseer; each node runs a Node Overseer plus one nodeAgent per experiment inside its own VM (VM = User-Mode Linux)]
Overseers
• Node overseer:
  • Add/remove experiment VMs
  • Swap experiment VMs
  • Monitor node health and experiment status
  • Mostly mechanism, no policy
• Master overseer:
  • Policy-maker that governs the grid
[Figure: experiments submitted through the UI enter the Master Overseer's experiment queue; its scheduler multicasts commands to each Node Overseer's handler, and reporting/feedback flows back to the Master Overseer's monitor]
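Illustrative sketch (not from the slides): the command names, their format, and the use of stdin in place of the real multicast control channel are all assumptions; the point is only the mechanism/policy split, where the node overseer carries out whatever the master overseer's scheduler decides.

```c
/* Hypothetical sketch of the mechanism/policy split: the node overseer only
   dispatches commands (add/remove/swap/status); *when* they arrive is decided
   by the master overseer's scheduler. Commands are read from stdin here as a
   stand-in for the real multicast control channel. */
#include <stdio.h>
#include <string.h>

static void add_vm(const char *expt)    { printf("create UML VM for %s\n", expt); }
static void remove_vm(const char *expt) { printf("tear down UML VM for %s\n", expt); }
static void swap_vm(const char *expt)   { printf("swap running experiment to %s\n", expt); }
static void report_status(const char *unused) { (void)unused; printf("report node health / experiment status\n"); }

struct handler { const char *cmd; void (*fn)(const char *); };

static const struct handler handlers[] = {   /* mechanism only, no policy */
    { "ADD",    add_vm },
    { "REMOVE", remove_vm },
    { "SWAP",   swap_vm },
    { "STATUS", report_status },
};

int main(void) {
    char line[256];
    while (fgets(line, sizeof(line), stdin)) {
        char cmd[32] = "", arg[128] = "";
        sscanf(line, "%31s %127s", cmd, arg);
        for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++)
            if (strcmp(cmd, handlers[i].cmd) == 0)
                handlers[i].fn(arg);
    }
    return 0;
}
```

Keeping all policy in the master overseer means a node can be swapped, drained, or monitored without knowing anything about the grid-wide schedule.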
Virtualization
• Why not process-level virtualization?
  • No isolation
  • Must share FS, address space, network stack, etc.
  • No cohesive "schedulable entity"
• What other alternatives are there?
  • Other virtualization platforms (VMware, Xen, etc.)
TDM: Virtualization
• Experiment runs inside a User-Mode Linux (UML) VM
• Wireless configuration
  • Guest has no way to read or set wifi config!
  • Wireless extensions in the virtual driver relay ioctls to the host kernel
[Figure: iwconfig in the guest VM issues an ioctl() against the UML kernel's virt_net driver, which tunnels it to net_80211 in the host kernel]
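The relay itself lives inside the UML virtual driver and is not spelled out on the slide; as a stand-in, the sketch below shows the host-side wireless-extensions ioctls that such a relayed request ultimately boils down to (interface name, ESSID, and channel are placeholder values).

```c
/* Minimal example of applying a wifi config via Linux wireless-extensions
   ioctls, the kind of request the virtual driver tunnels from guest to host.
   "wifi0", "expA", and channel 6 are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/wireless.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* any socket works as an ioctl handle */
    struct iwreq wrq;

    /* Set the ESSID. */
    char essid[] = "expA";
    memset(&wrq, 0, sizeof(wrq));
    strncpy(wrq.ifr_name, "wifi0", IFNAMSIZ);
    wrq.u.essid.pointer = essid;
    wrq.u.essid.length  = strlen(essid);
    wrq.u.essid.flags   = 1;                   /* ESSID is "on" */
    if (ioctl(fd, SIOCSIWESSID, &wrq) < 0) perror("SIOCSIWESSID");

    /* Set the channel: with e == 0 and a small m, drivers treat m as a channel number. */
    memset(&wrq, 0, sizeof(wrq));
    strncpy(wrq.ifr_name, "wifi0", IFNAMSIZ);
    wrq.u.freq.m = 6;
    wrq.u.freq.e = 0;
    if (ioctl(fd, SIOCSIWFREQ, &wrq) < 0) perror("SIOCSIWFREQ");

    close(fd);
    return 0;
}
```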
TDM: Routing
• nodeHandler commands (multicast) arrive on eth (10.10.x.y) and mrouted forwards them to all VMs in the mcast group
• Ingress experiment traffic on wifi is steered by the node routing table and iptables: experiment-channel addresses (192.169.x.y) are DNAT'ed (192.169 -> 192.168) to the VM addresses (192.168.x.y)
[Figure: node routing table with the wifi and eth paths into the per-experiment VMs]
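The slide names the mechanisms but not the exact rules; the snippet below is one plausible way to install the 192.169 -> 192.168 prefix rewrite. The interface name and the choice of the NETMAP target are assumptions, not the actual ORBIT configuration, and mrouted handles the multicast side separately.

```c
/* Hypothetical setup of the per-node address translation described on the
   slide: experiment-channel addresses 192.169.x.y are rewritten on ingress to
   the matching VM addresses 192.168.x.y. (mrouted, not shown, forwards the
   multicast nodeHandler commands to the VMs.) */
#include <stdlib.h>

int main(void) {
    /* NETMAP does a 1:1 prefix-to-prefix rewrite, matching the
       "DNAT: 192.169 -> 192.168" label; the wifi0 interface name is assumed. */
    int rc = system("iptables -t nat -A PREROUTING -i wifi0 "
                    "-d 192.169.0.0/16 -j NETMAP --to 192.168.0.0/16");
    return rc == 0 ? 0 : 1;
}
```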
Synchronization challenges
• Without tight synchronization, experiment packets might be dropped or misdirected
• Host: VMs should start/stop at exactly the same time
  • Time spent restoring wifi config varies
  • Operating system is not an RTOS
  • Ruby is interpreted and garbage-collected
  • Network latency for overseer commands
    • Mean: 3.9 ms, Median: 2.7 ms, Std-dev: 6 ms
• Swap time between experiments
Synchronization: Swap time I
• Variables involved in swap time
  • Largest contributor: wifi configuration time
    • More differences in wifi configuration = longer config time
  • Network latency for master commands
  • Ruby latency in executing commands
Synchronization: Swap Time II
• We can eliminate wifi config latency and reduce the effects of network and Ruby latencies
• "Swap gaps"
  • A configuration timing buffer
  • VMs not running, but incoming packets are still received and routed to the right place
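Rough sketch of the swap-gap idea (the helper functions, the absolute-deadline sleep, and the gap length are all invented here): the variable wifi configuration time is absorbed inside a fixed buffer during which no VM runs.

```c
/* Sketch of one TDM swap with a "swap gap": stop the outgoing VM, apply the
   incoming experiment's wifi config inside a fixed buffer during which no VM
   runs, then start the incoming VM. Helpers and the gap length are hypothetical. */
#include <stdio.h>
#include <time.h>

static void pause_vm(int id)          { printf("pause VM of expt %d\n", id); }
static void resume_vm(int id)         { printf("resume VM of expt %d\n", id); }
static void apply_wifi_config(int id) { printf("apply wifi config for expt %d\n", id); }

static void swap_with_gap(int outgoing, int incoming, long gap_ms) {
    struct timespec gap_end;
    clock_gettime(CLOCK_MONOTONIC, &gap_end);
    gap_end.tv_sec  += gap_ms / 1000;
    gap_end.tv_nsec += (gap_ms % 1000) * 1000000L;
    if (gap_end.tv_nsec >= 1000000000L) { gap_end.tv_sec++; gap_end.tv_nsec -= 1000000000L; }

    pause_vm(outgoing);            /* neither VM runs during the gap...             */
    apply_wifi_config(incoming);   /* ...but packets keep being received and routed */

    /* Sleeping to an absolute deadline absorbs the *variable* config time
       inside a *fixed* gap, so both experiments see stable slice boundaries. */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &gap_end, NULL);
    resume_vm(incoming);
}

int main(void) {
    swap_with_gap(1, 2, 200);      /* 200 ms gap, chosen arbitrarily */
    return 0;
}
```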
Ruby Network Latency
• Inside a VM, Ruby shows anomalous network latency
• Example below: tcpdump alongside a simple Ruby recv loop
• No delays with C
• Cause yet unknown

  00.000 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
  00.035 received 30 bytes
  01.037 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 30
  01.065 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 56
  01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 40
  01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
  01.143 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 44
  11.018 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
  12.071 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
  23.195 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
  24.273 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
  26.192 received 30 bytes
  34.282 IP 10.11.0.1.4266 > 224.4.0.1.9006: UDP, length 30
  35.332 IP 10.11.0.1.4263 > 224.4.0.1.9006: UDP, length 45
  40.431 received 56 bytes
  40.435 received 40 bytes
  40.438 received 45 bytes
  40.450 received 44 bytes
  40.458 received 30 bytes
  40.462 received 45 bytes
  40.470 received 30 bytes
  40.476 received 45 bytes
  40.480 received 30 bytes
  40.484 received 45 bytes
  (24+ secs between packets appearing on the wire and Ruby's recv seeing them)
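For comparison with the Ruby loop, a C receiver along the following lines shows no such delay; the group address and port are taken from the trace above and may not match the real control channel.

```c
/* Minimal C counterpart of the Ruby recv loop: join the multicast group seen
   in the trace and print a timestamp per datagram. With this loop the receive
   times track the tcpdump times, i.e. no 24+ second gaps. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(9006);                 /* port from the trace  */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("224.4.0.1"); /* group from the trace */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[1500];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < 0) break;
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("%ld.%03ld received %zd bytes\n",
               (long)ts.tv_sec, ts.tv_nsec / 1000000L, n);
    }
    close(fd);
    return 0;
}
```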
UI screen shots
[Screenshots: the user interface during time slice 1 and time slice 2]
Performance: Runtime Breakdown
• Booting a VM is fast
• Each phase slightly longer in new system
• Ruby network delay causes significant variance in data set
• Handler must approximate sleep times
Performance: Overall Duration
• Advantages
  • Boot duration
• Disadvantages
  • Swap gaps
Future work: short term
• Improving synchrony between nodes
  • More robust protocol
  • Porting Ruby code to C, where appropriate
• Dual interfaces
  • Nodes equipped with two cards
  • Switch between them during swaps, so that interface configuration can be preloaded at zero cost (sketched after the next slide)
Dual interfaces
[Figure: the Node Overseer's routing logic tracks the "current card"; wifi0 carries one experiment's config (Essid "expA", Mode B, Channel 6) while wifi1 holds the next one's (Essid "expB", Mode G, Channel 11), each serving its experiment's nodeAgent VMs]
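A possible shape for the dual-card swap (interface names, config fields, and helpers are illustrative, not the actual implementation): the idle card is configured ahead of time, so the swap reduces to flipping which card the routing logic treats as current.

```c
/* Sketch of the dual-interface idea: while "wifi0" carries the running
   experiment, "wifi1" is preconfigured for the next one, so the swap itself is
   just flipping the "current card" pointer used by the routing logic. */
#include <stdio.h>

struct wifi_config { const char *essid; char mode; int channel; };

static void configure(const char *ifname, const struct wifi_config *c) {
    /* in the real system this would go through the tunneled wireless ioctls */
    printf("%s <- essid=%s mode=%c channel=%d\n", ifname, c->essid, c->mode, c->channel);
}

int main(void) {
    const char *card[2] = { "wifi0", "wifi1" };
    int current = 0;

    struct wifi_config expA = { "expA", 'B', 6 };
    struct wifi_config expB = { "expB", 'G', 11 };

    configure(card[current], &expA);      /* expA runs on the current card      */
    configure(card[1 - current], &expB);  /* expB is preloaded on the idle card */

    /* At swap time no wifi configuration is needed: just retarget the routing. */
    current = 1 - current;
    printf("routing logic: current card is now %s (expB live, zero config cost)\n",
           card[current]);
    return 0;
}
```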
Future work: long term
• Greater scalability
  • Allow each experiment to use, say, 100s of nodes to emulate 1000s of nodes
• Intra-experiment TDM virtualization
  • Initial evaluation is quite promising
Intra-experiment TDM
• Any communication topology can be modeled as a graph
Intra-experiment TDM
• We can emulate all communication on the topology accurately, as long as we can emulate the reception behavior of the node with the highest degree
Intra-experiment TDM
• Testbed of 8 nodes
• Time-share of different logical nodes to physical facility nodes
[Figures: the logical-to-physical mapping on the 8-node testbed during time units 1, 2, and 3]
Some challenges
• How to perform the scheduling?
  • A mapping problem (see the sketch below)
• How to achieve the right degree of synchronization?
  • Use of a fast backbone and real-time approaches
• What are the implications of slowdown?
  • Bounded by the number of partitions
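A toy sketch of the mapping problem (the example topology, the round-robin assignment, and the logical-node count are chosen only for illustration; this is not the ORBIT scheduler): it reports the highest degree, which is what must be emulated faithfully, and the number of partitions, which bounds the slowdown.

```c
/* Toy sketch of intra-experiment TDM mapping: partition the logical nodes
   across the physical testbed round-robin and report the number of time units
   (partitions), which bounds the slowdown, plus the maximum degree in the
   logical topology. The topology below is made up for illustration. */
#include <stdio.h>

#define LOGICAL  12          /* logical nodes in the experiment topology */
#define PHYSICAL 8           /* physical testbed nodes                   */

int main(void) {
    /* example logical topology: a ring of 12 nodes plus one chord */
    int adj[LOGICAL][LOGICAL] = {0};
    for (int i = 0; i < LOGICAL; i++)
        adj[i][(i + 1) % LOGICAL] = adj[(i + 1) % LOGICAL][i] = 1;
    adj[0][6] = adj[6][0] = 1;

    int max_degree = 0;
    for (int i = 0; i < LOGICAL; i++) {
        int d = 0;
        for (int j = 0; j < LOGICAL; j++) d += adj[i][j];
        if (d > max_degree) max_degree = d;
    }

    /* round-robin partition: logical node i runs on physical node i % PHYSICAL
       during time unit i / PHYSICAL */
    int partitions = (LOGICAL + PHYSICAL - 1) / PHYSICAL;   /* ceil(L / P) */

    printf("max degree to emulate: %d\n", max_degree);
    printf("time units needed (slowdown bound): %d\n", partitions);
    for (int i = 0; i < LOGICAL; i++)
        printf("logical node %2d -> physical node %d in time unit %d\n",
               i, i % PHYSICAL, i / PHYSICAL);
    return 0;
}
```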
Conclusions
• Increased utilization through sharing
• More careful tuning needed for smaller time slices
  • Need chipset vendor support for very small time slices
• Non-real-time apps, or apps with coarse real-time needs, are best suited to this virtualization approach