This paper introduces SecondNet, a network virtualization architecture developed by researchers from Microsoft Research, Huawei Technologies, and various universities. SecondNet guarantees bandwidth for Virtual Data Centers (VDCs) using Port-Switching based Source Routing. The architecture ensures efficient VDC allocation, scalability, and practical deployment on different network topologies.
SecondNet: A Data Center Network Virtualization Architecture with Bandwidth Guarantees
Chuanxiong Guo (1), Guohan Lu (1), Helen J. Wang (2), Shuang Yang (3), Chao Kong (4), Peng Sun (5), Wenfei Wu (6), Yongguang Zhang (1)
(1) Microsoft Research Asia, (2) Microsoft Research Redmond, (3) Stanford University, (4) Huawei Technologies, (5) Princeton University, (6) University of Wisconsin-Madison
Dec 2, 2010, Philadelphia, USA
Outline • Background • VDC abstraction and service model • SecondNet architecture • Port-Switching based Source Routing • VDC allocation • Experimental results • Related work • Conclusion
Background • Network virtualization with bandwidth guarantee [figure omitted; source: Microsoft]
[Architecture figure: VDCs (VDC0 … VDCn) sit on top of the SecondNet DCN virtualization layer, which exposes a control interface (VM/switch/server management, topology updates) and a data interface (IP packets in, PSSR packets out) over DCN infrastructures such as Fat-tree, DCell, BCube, VL2, and others]
• Virtual Data Center (VDC) • A set of VMs plus an SLA • Every VDC has its own (private) IP address space • Service model • Best-effort • Type-1: local egress/ingress bandwidth guarantee • Type-0: bandwidth guarantee between any two VMs (e.g., 500 Mb/s between a VM pair)
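The service model above can be sketched as a small data structure. This is an illustrative sketch, not the paper's API; all names (`VDCSpec`, `guarantee`) and the sample IPs are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a VDC specification. A VDC is a set of VMs plus
# an SLA: type-0 pairs get pairwise bandwidth guarantees, type-1 VMs get
# local egress/ingress guarantees, and everything else is best-effort.

@dataclass
class VDCSpec:
    vdc_id: int
    vms: list                                          # private IPs, e.g. ["10.0.0.1", ...]
    pairwise_bw: dict = field(default_factory=dict)    # type-0: (src, dst) -> Mb/s
    egress_bw: dict = field(default_factory=dict)      # type-1: vm -> Mb/s
    ingress_bw: dict = field(default_factory=dict)     # type-1: vm -> Mb/s

    def guarantee(self, src, dst):
        """Bandwidth guarantee for traffic src -> dst, in Mb/s.

        Type-0 pairwise guarantees take precedence; otherwise the flow is
        capped by the type-1 egress/ingress guarantees; otherwise it is
        best-effort (no guarantee, returned here as 0).
        """
        if (src, dst) in self.pairwise_bw:
            return self.pairwise_bw[(src, dst)]
        if src in self.egress_bw and dst in self.ingress_bw:
            return min(self.egress_bw[src], self.ingress_bw[dst])
        return 0  # best-effort

vdc0 = VDCSpec(0, ["10.0.0.1", "10.0.0.2"],
               pairwise_bw={("10.0.0.1", "10.0.0.2"): 500})
print(vdc0.guarantee("10.0.0.1", "10.0.0.2"))  # 500, the 500 Mb/s example above
```

Note that a type-0 guarantee is directional in this sketch; the reverse direction would need its own entry or a type-1 guarantee.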
Challenges in VDC bandwidth guarantee • Timely and efficient VDC allocation and expansion • NP-hard problem • Scalable VDC state maintenance • State: VM-to-physical-server mapping, bandwidth reservations, routing paths • The number of state entries can easily reach tens of millions • Practical deployment • Applicable to various topologies (DCell, BCube, fat-tree, VL2) and addressing schemes • Implementable using commodity servers and switches • Failure handling
SecondNet • Logically centralized VDC manager • Efficient, low-time-complexity VDC allocation • Failure handling • Virtualization and bandwidth reservation state kept at servers • All state resides in server hypervisors • Stateless switch core • Port-switching based source routing • Makes SecondNet applicable to all network topologies • Deployable with current commodity switches
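The "all state at server hypervisors" point can be made concrete with a sketch of the per-hypervisor tables. The field names, keys, and values below are illustrative, not SecondNet's actual data structures; only the three kinds of state (v2p mapping, bandwidth reservation, PSSR path) come from the talk.

```python
# Illustrative per-hypervisor state that keeps the switch core stateless:
# the VM-to-physical-server (v2p) mapping, per-VM-pair bandwidth
# reservations, and the pinned PSSR path (a list of switch output ports).
# All identifiers and numbers here are made up for the example.

hypervisor_state = {
    "v2p": {                      # (vdc_id, vm_ip) -> physical server
        (0, "10.0.0.1"): "s0",
        (0, "10.0.0.2"): "s1",
    },
    "band_resv": {                # (vdc_id, src, dst) -> reserved Mb/s
        (0, "10.0.0.1", "10.0.0.2"): 500,
    },
    "pssr_path": {                # (vdc_id, src, dst) -> switch output ports
        (0, "10.0.0.1", "10.0.0.2"): [2, 1, 0],
    },
}

def lookup(vdc_id, src, dst):
    """Return (destination server, reserved bandwidth, port list) for a flow.

    Flows without a reservation fall back to best-effort (0 Mb/s, no
    pinned path).
    """
    key = (vdc_id, src, dst)
    return (hypervisor_state["v2p"][(vdc_id, dst)],
            hypervisor_state["band_resv"].get(key, 0),
            hypervisor_state["pssr_path"].get(key))
```

Because the tables are keyed by (VDC, VM) rather than stored in switches, switch failures lose no VDC state and the tens of millions of entries are spread across the servers that own them.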
[Figure: PSSR path setup for VDC0 VM0 -> VM1. Users submit requests to the VDC manager; switches in the trusted domain are stateless; server hypervisors are stateful, holding the v2p mapping, bandwidth reservations, and PSSR paths; the VMs themselves are untrusted]
Port-switching based source routing (PSSR) • Source routing • Pins the routing path for bandwidth guarantee • Keeps state only at server hypervisors • Port-switching • Since the topology is known, forwarding by port number is possible • Simpler switching functionality • PSSR • Stateless switch core • Addressing agnostic • Can be implemented using MPLS
PSSR example [Figure: a VDC0 packet from VM0 (ip0) on server s0 to VM1 (ip1) on server s1. hypervisor0 prepends the port list to the packet, each stateless switch forwards on the next port number in the header, and hypervisor1 strips the header before delivering to VM1]
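The port-switching idea can be sketched in a few lines: the packet header carries the whole path as a list of output ports plus an index, and each switch just forwards on the indexed port and advances the index. This is a toy simulation under assumed names (`pssr_forward`, the `links` map, the port numbers), not the paper's implementation.

```python
# Minimal simulation of port-switching based source routing (PSSR).
# The source hypervisor, which knows the topology, writes the full path
# into the header as output-port numbers; switches keep no per-VDC state.

def pssr_forward(packet, switch_id, links):
    """Forward one hop: pick the output port from the source route.

    links maps (switch_id, port) -> next hop; ports are hypothetical.
    """
    port = packet["ports"][packet["index"]]
    packet["index"] += 1
    return links[(switch_id, port)]

# Toy topology: server s0 -> switch sw0 (port 2) -> switch sw1 (port 1) -> server s1
links = {("sw0", 2): "sw1", ("sw1", 1): "s1"}
packet = {"ports": [2, 1], "index": 0, "payload": "data"}

hop = "sw0"
while hop.startswith("sw"):
    hop = pssr_forward(packet, hop, links)
print(hop)  # s1
```

Forwarding never inspects addresses, which is why PSSR is addressing-agnostic and maps naturally onto an MPLS label stack (one label per output port).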
VDC allocation • 0: Cluster pre-calculation: divide servers into clusters of different sizes • 1: Cluster selection • 2: Min-cost flow (VM-to-server mapping) • 3: Routing path computation
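The allocation steps above can be sketched as follows. This is a heavy simplification: a greedy assignment stands in for the paper's min-cost flow step, and the cluster sizes and residual bandwidths are invented for the example.

```python
# Simplified sketch of VDC allocation: pick a pre-computed cluster, then
# map VMs to servers. The paper solves the mapping as a min-cost flow;
# here a greedy most-residual-bandwidth-first assignment stands in.

def select_cluster(clusters, num_vms):
    """Step 1: pick the smallest pre-computed cluster that can host the VDC."""
    feasible = [c for c in clusters if len(c["servers"]) >= num_vms]
    return min(feasible, key=lambda c: len(c["servers"])) if feasible else None

def assign_vms(cluster, vm_demands):
    """Step 2 (simplified): place each VM, largest demand first, on the
    server with the most residual bandwidth; fail if none fits."""
    residual = dict(cluster["residual_bw"])   # server -> Mb/s left
    mapping = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda x: -x[1]):
        best = max(residual, key=residual.get)
        if residual[best] < demand:
            return None                       # allocation fails; try next cluster
        mapping[vm] = best
        residual[best] -= demand
    return mapping

clusters = [{"servers": ["s0", "s1"], "residual_bw": {"s0": 800, "s1": 600}},
            {"servers": ["s0", "s1", "s2", "s3"],
             "residual_bw": {"s0": 800, "s1": 600, "s2": 1000, "s3": 400}}]
c = select_cluster(clusters, 2)
print(assign_vms(c, {"vm0": 500, "vm1": 300}))  # {'vm0': 's0', 'vm1': 's1'}
```

Restricting the search to one cluster at a time is what keeps allocation fast despite the underlying problem being NP-hard; if allocation fails in the selected cluster, the allocator falls back to a larger one. Step 3 (routing path computation, omitted here) then pins PSSR paths with enough residual link bandwidth between the chosen servers.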
Simulation: VDC allocation time Fat-tree with 27,648 servers VL2 with 103,680 servers BCube with 4,096 servers
Implementation [Figure: Hyper-V driver stack. In the root partition, a Hyper-V manager talks to the VDC manager via WMI; the kernel-space secondnet.sys driver sits between the VMSwitch and the NDIS NIC driver on the send/receive paths, implementing the policy manager, V2P tables, neighbor maintenance, and port-switching; child partitions run apps over TCP/IP and a virtual NIC connected via VMBus]
Testbed • A BCube testbed • 16 servers (Dell Precision 490 workstations with Intel 2.00 GHz dual-core CPUs, 4 GB DRAM, 160 GB disk) • 8 8-port mini-switches (D-Link DGS-1008D 8-port Gigabit switches) • NICs • Intel PRO/1000 PT quad-port Ethernet NICs • NetFPGA
Experiment: bandwidth guarantee • Physical topology: fat-tree • VDC1 and VDC2 both have 24 VMs • Each server hosts one VM for each VDC [Figure: servers 0-23 in a fat-tree, with one VDC1 VM and one VDC2 VM co-located on each server]
Related work • DCN virtualization • Seawall, Netshare • VL2 • Amazon VPC, EC2 • Virtual network allocation • Simulated annealing • Virtual network embedding • Bandwidth guarantee • IntServ, DiffServ • VPN hose model
Summary • VDC as abstraction and resource allocation unit • SecondNet as the network virtualization layer for VDC isolation and performance guarantee • Virtualization and bandwidth guarantee state at server hypervisors • VDC manager for VDC allocation and failure handling • Port-switching based source routing for implementation • Future work