VirtualTransits : a Platform for Network Virtualization across Data Centers Mon-Yen Luo and Jun-Yi Chen Department of Computer Science & Information Engineering, National Kaohsiung University of Applied Sciences, Taiwan
Outline • Introduction • Motivation and Problem • System Design • Intra-cloud Mechanisms • Inter-Cloud Mechanisms • Performance Evaluation • Conclusion
Introduction • Modern data centers for cloud computing contain tens of thousands of physical machines and support numerous tenants with different bandwidth requirements. • Such highly distributed data environments have network requirements that are distinctly different from those of general-purpose networks.
Motivation • Previous research on cloud networking has mainly focused on mechanisms inside a single data center. • Examples include VL2, PortLand, and NetLord. • However, little attention has been paid to networking mechanisms for integrating multiple datacenters. • We need a mechanism that supports efficient and coherent management of virtual networks across data centers to achieve several important capabilities, such as • Virtual machine migration • Managed traffic paths among middleboxes • Cloud federation
Typical Data Center Network A typical datacenter has a mix of numerous bare-metal and virtualized servers, and a mix of physical and virtual switches. [Diagram: Internet → Core → Aggregation Switches → ToR Switches → physical servers running vSwitches and VMs]
Typical Data Center Network Generally, a typical datacenter network involves a multilevel tree architecture. Machines are organized into racks and rows under the logical hierarchical network tree. Each rack contains several machines interconnected by a top-of-rack (ToR) switch that serves as a leaf of the tree and delivers high bandwidth to directly connected hosts.
Typical Data Center Network Switches at the leaves have a limited number of high-speed (e.g., 10 GigE) uplinks to one or more network devices that aggregate and transfer packets among the leaf switches.
Typical Data Center Network At the root of the tree, core switches with very high throughput and switching capacity relay traffic for inter-row communication.
Problems : Intra Datacenter An enterprise application often uses a multi-tiered architecture of server systems; for example, a typical three-tiered web service with a portal, application, and database tier. VMs belonging to the same service may be hosted on, or be migrated to, various physical hosts across server racks. We need an efficient way to deploy and manage the network of each tenant service in such a distributed environment.
Problems : Intra Datacenter An enterprise application often needs a diverse array of network appliances, such as firewalls and server load balancers. Hosting enterprise applications and their desired topologies on the cloud is challenging because it requires distributed manual configuration and ensuring that traffic passes through the appropriate application and network appliances.
Problems : Inter Datacenter Currently, seven universities have joined us to form a federated cloud platform. We also have an international partner, iCAIR at Northwestern University. Essentially, these data centers are interconnected by the public Internet.
Problems : Inter Datacenter VMs belonging to the same service may be hosted on various physical hosts across different network domains. Dynamic, on-demand interconnections among resources at multiple remote sites are required for these VMs to communicate with each other over a private virtual network.
Problems : Inter Datacenter The problem is: how do we dynamically build a virtual network with any desired topology over the production network, while keeping the traffic of each virtual network isolated and protected from other Internet traffic?
Design Issues • Flexible: Ideally, a cloud network should provide a network abstraction that allows a tenant to design its network as if it were the sole occupant of a datacenter. The proposed system should provide an efficient mechanism to dynamically create each tenant's network. • Compatible: Researchers have proposed several important and practical schemes within a single datacenter network. The proposed system should find a way to remain compatible with these existing approaches. • Practical: The proposed system must be practically deployable, with commodity switches and real production networks. We strive for approaches that account for the realities of real-world environments.
System Overview We design the system as a layered software stack, providing hooks to our previous work and other middleware, and serving as a control plane to orchestrate all operations of virtual networks.
System Overview A service management system enables a tenant to request system resources and service components. Through a graphical user interface, users can request resources, register resources, authenticate, and monitor the system.
System Overview The resources are discovered and allocated by the control frameworks. After the resources are discovered, the appropriate service instances can be instantiated on the designated nodes with a virtual infrastructure management system such as Eucalyptus.
System Overview A virtual network description module is implemented to parse the requirements of service requesters. After the topology of a virtual network is determined, the VirtualTransits system is invoked to communicate with the corresponding nodes, creating a virtual network for this tenant service.
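The description-parsing step above can be sketched as follows. This is a minimal illustration, not the paper's actual module: the request schema, field names, and the function are all hypothetical, showing only how a tenant's requested topology might be turned into a concrete list of switches and links to instantiate.

```python
def parse_vnet_description(desc):
    """Validate a tenant's virtual network request and return the
    elements a system like VirtualTransits would need to create.
    The dict schema here is illustrative, not the paper's format."""
    switches = set()
    links = []
    for link in desc["links"]:
        a, b = link["endpoints"]
        switches.add(a["switch"])
        switches.add(b["switch"])
        links.append((a["switch"], a["port"], b["switch"], b["port"]))
    return {"tenant": desc["tenant"], "vlan": desc["vlan"],
            "switches": sorted(switches), "links": links}

# Hypothetical request: one tenant, one VLAN, one virtual link.
request = {
    "tenant": "tenant-42",
    "vlan": 100,
    "links": [
        {"endpoints": [{"switch": "vsw-a1", "port": 1},
                       {"switch": "vsw-a2", "port": 1}]},
    ],
}
plan = parse_vnet_description(request)
print(plan["switches"])  # -> ['vsw-a1', 'vsw-a2']
```

Once such a plan exists, the control plane knows exactly which vSwitches to configure and which links to wire up.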
VirtualTransits System Basically, the VirtualTransits system is composed of two parts: the intra-cloud part and the inter-cloud part. Each tenant service has its own virtual network with its own particular address space. Each virtual network may also be configured with a VLAN tag. For example, in the figure, the VMs shown in purple belong to the same virtual network.
VirtualTransits System For inter-cloud, we propose a gateway system with a novel VLAN translation mechanism that enables virtual private networks to span the public Internet and makes more effective use of VPLS facilities among datacenters.
Intra-Cloud Mechanisms • The basic idea of our design is to provide a coherent way to dynamically connect and configure the virtual switching elements distributed on multiple nodes, enabling arbitrary virtual networks. • Our implementation of the intra-cloud mechanisms is logically composed of two major parts: • Generic control interface • Virtual Switch Handler
Generic Control Interface • Provides a set of functions for the creation and management of virtual networks. • Serves as a common set of network abstractions that allow users to interconnect multiple VMs at different sites. • Currently, we provide the following primitives to abstract a virtual network and implement the following basic operations to manage one: • Switching element • functions for creating, removing, and monitoring a virtual switch on a dedicated node. • Virtual port • functions for adding a virtual port and binding it to a VM, removing a virtual port, or disassociating a port from a VM.
Generic Control Interface (cont.) • Linking: • functions for creating, modifying, and removing connection paths between virtual ports. • We use these functions to create the topology of a given virtual network. • Policy: • functions for setting path constraints, such as QoS constraints. • Based on these function libraries, we have implemented the core modules of the virtual network management system.
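The four primitive groups above can be sketched as one small class. This is an assumed shape, not the paper's real API: all class, method, and parameter names are hypothetical, intended only to show how switching-element, virtual-port, linking, and policy operations fit together.

```python
class VirtualNetworkAPI:
    """Sketch (hypothetical names) of the four primitive groups the
    control interface exposes: switching element, virtual port,
    linking, and policy."""

    def __init__(self):
        self.switches = {}   # (node, switch name) -> set of ports
        self.bindings = {}   # (node, switch, port) -> VM id
        self.links = []      # pairs of (node, switch, port) endpoints
        self.policies = {}   # link id -> constraint dict

    # -- switching element: create a virtual switch on a node --
    def create_switch(self, node, name):
        self.switches[(node, name)] = set()

    # -- virtual port: add a port and optionally bind it to a VM --
    def add_port(self, node, name, port, vm=None):
        self.switches[(node, name)].add(port)
        if vm is not None:
            self.bindings[(node, name, port)] = vm

    # -- linking: connect two virtual ports to shape the topology --
    def link(self, endpoint_a, endpoint_b):
        self.links.append((endpoint_a, endpoint_b))
        return len(self.links) - 1

    # -- policy: attach a path constraint (e.g., QoS) to a link --
    def set_policy(self, link_id, **constraints):
        self.policies[link_id] = constraints

# Build a two-node tenant topology with one constrained link.
api = VirtualNetworkAPI()
api.create_switch("nodeA", "vsw0")
api.create_switch("nodeB", "vsw1")
api.add_port("nodeA", "vsw0", 1, vm="vm-1")
api.add_port("nodeB", "vsw1", 1, vm="vm-4")
lid = api.link(("nodeA", "vsw0", 1), ("nodeB", "vsw1", 1))
api.set_policy(lid, max_latency_ms=10)
print(len(api.switches), len(api.links))  # -> 2 1
```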
Virtual Switch Handler • Serves as the driver layer between the functions required by the control interface and the underlying virtual switches. • Each handler depends on the implementation of a particular virtual switch. • To prove this concept, we have implemented two handlers: one for Open vSwitch and the other for VMware's vSwitch.
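A minimal sketch of this driver idea, assuming a common interface with per-backend classes: the `ovs-vsctl add-br` and `add-port` commands are the standard Open vSwitch CLI, but the class names, dispatch, and the choice to return command lines rather than execute them are illustrative.

```python
class SwitchHandler:
    """Common driver interface; each backend implements the same calls."""
    def create_switch(self, name):
        raise NotImplementedError
    def add_port(self, switch, port):
        raise NotImplementedError

class OvsHandler(SwitchHandler):
    """Driver for Open vSwitch: control-interface calls become
    ovs-vsctl command lines (returned here rather than executed)."""
    def create_switch(self, name):
        return ["ovs-vsctl", "add-br", name]
    def add_port(self, switch, port):
        return ["ovs-vsctl", "add-port", switch, port]

def get_handler(backend):
    # A VMware vSwitch handler would register here in the same way,
    # translating the same calls to that hypervisor's own API.
    return {"ovs": OvsHandler}[backend]()

handler = get_handler("ovs")
cmd = handler.create_switch("br-tenant1")
print(" ".join(cmd))  # -> ovs-vsctl add-br br-tenant1
```

Because the control interface only sees `SwitchHandler`, a new virtual switch type is supported by adding one driver class, without touching the rest of the stack.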
Inter-Cloud Mechanism POX Controller • VLAN Translation Mechanism • Provides a control API to applications or services • Programs the edge OF switch to transmit tagged traffic across OF networks • Dynamically learns host locations across OF networks with the same VLAN ID • We utilize the OpenFlow protocol to control the virtual network. • We implemented several control functions in the POX controller. • The gateway system is implemented with a NetFPGA-based OpenFlow switch, for which we implemented a new action to perform the VLAN translation.
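At the packet level, a VLAN translation action amounts to rewriting the 12-bit VLAN ID inside the 802.1Q tag (TPID 0x8100 followed by the TCI field) while leaving the priority and DEI bits intact. The sketch below models that rewrite in plain Python; it is a software illustration of the idea, not the NetFPGA implementation.

```python
import struct

def translate_vlan(frame: bytes, new_vid: int) -> bytes:
    """Rewrite the 12-bit VLAN ID of an 802.1Q-tagged Ethernet
    frame, preserving the PCP and DEI bits of the TCI field."""
    # Ethernet layout: dst MAC (6) + src MAC (6), then TPID/TCI.
    tpid = struct.unpack_from("!H", frame, 12)[0]
    if tpid != 0x8100:
        raise ValueError("frame is not 802.1Q tagged")
    tci = struct.unpack_from("!H", frame, 14)[0]
    tci = (tci & 0xF000) | (new_vid & 0x0FFF)  # keep PCP/DEI, swap VID
    return frame[:14] + struct.pack("!H", tci) + frame[16:]

# Build a minimal tagged frame carrying VLAN 100 and retag it to
# 2782, mirroring the Site A -> transit translation in our example.
frame = bytes(12) + struct.pack("!HH", 0x8100, 100) + b"payload"
out = translate_vlan(frame, 2782)
print(struct.unpack_from("!H", out, 14)[0] & 0x0FFF)  # -> 2782
```

Because only two bytes of an existing header change, the action adds no per-packet encapsulation overhead, which is what makes a hardware implementation straightforward.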
Operation of VirtualTransits This figure shows a real deployment example in our federated datacenter platform. In this example, five VMs (VM1 to VM5) belonging to a single tenant were allocated to different sites. The sub-network in site A used VLAN 100 for intra-cloud communication, and the sub-network in site C used VLAN 200.
Operation of VirtualTransits We use the OpenFlow protocol and extended the POX controller by implementing several modules and two associated tables (an ARP table and a transit table) to enable virtual transits. When a virtual network is deployed, the related information is updated in the two tables in the controller.
Operation of VirtualTransits An offline algorithm is invoked to compute the transit paths and write the related information into the transit table. In the example illustrated in the figure, the virtual network was deployed using VPLS path 2782 (from Site A to Site C) and VPLS path 2781 (from Site C to Site B).
Operation of VirtualTransits When the first packet of the traffic flow from VM1 to VM4 arrives at OFSA, OFSA sends a control packet to the controller because the flow is missing from its local flow table. Information from the control packet (such as the OFS ID, VLAN ID, and destination MAC) is used as the key to look up the mapping entry in the transit table.
Operation of VirtualTransits If a mapping entry is found, the transit VLAN ID and output port are sent back to the OFS as an action that instructs the OpenFlow switch how to forward the packet. In our example, the packet from VM1 to VM4 is sent to OFSC, and the VLAN tag is translated from 100 to 2782.
Operation of VirtualTransits The packet is then sent to OFSB following a similar process (i.e., VLAN 2782 is translated to 2781), and finally sent to Site B by translating 2781 to 200. The gateway system manipulates the two tables to perform VLAN tag translation between network domains.
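The hop-by-hop operation above can be modeled as a simple lookup table. This is a simplified sketch, not the controller code: the table layout and the MAC address for VM4 are assumed, while the VLAN numbers (100 → 2782 → 2781 → 200) and switch names follow the example in the text.

```python
VM4_MAC = "00:00:00:00:00:04"  # hypothetical address for VM4

# Transit table: (switch ID, incoming VLAN, destination MAC) ->
# (VLAN to rewrite to, output port). The entries mirror the example:
# VLAN 100 at Site A rides VPLS 2782 to Site C, then VPLS 2781 to
# Site B, where it becomes VLAN 200 in the local sub-network.
TRANSIT_TABLE = {
    ("OFSA", 100,  VM4_MAC): (2782, 2),
    ("OFSC", 2782, VM4_MAC): (2781, 3),
    ("OFSB", 2781, VM4_MAC): (200,  1),
}

def forward(switch, vlan, dst_mac):
    """Controller-side lookup on a flow-table miss (packet-in)."""
    entry = TRANSIT_TABLE.get((switch, vlan, dst_mac))
    if entry is None:
        raise LookupError("no transit entry for this flow")
    return entry  # (new VLAN tag, output port) installed as a flow rule

# Trace the VM1 -> VM4 path across the three edge switches.
vlan = 100
for sw in ("OFSA", "OFSC", "OFSB"):
    vlan, port = forward(sw, vlan, VM4_MAC)
print(vlan)  # -> 200
```

After the first packet triggers these lookups, the resulting flow rules are cached in the switches, so subsequent packets are translated entirely in the data plane.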
Performance Evaluation We designed the following three scenarios for the performance comparison: • Baseline: the VMs are connected across commodity high-performance Ethernet switches with VLANs and trunks. • VLAN translation: the VMs are connected by the proposed system, i.e., by the VLAN translation and the modules in the distributed virtual switches. • GRE tunnel: the VMs are connected by a GRE tunnel, with encapsulation performed by a kernel module.
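A back-of-envelope illustration (not a figure from the paper) of one structural cost in the GRE scenario: each tunneled frame gains an outer IPv4 header (20 bytes) plus a minimal GRE header (4 bytes), whereas VLAN translation only rewrites an existing tag and adds nothing per packet.

```python
MTU = 1500                 # bytes available per Ethernet payload
GRE_OVERHEAD = 20 + 4      # outer IPv4 header + minimal GRE header

payload_plain = MTU                  # tenant bytes per packet, untunneled
payload_gre = MTU - GRE_OVERHEAD     # tenant bytes per packet over GRE
efficiency = payload_gre / payload_plain
print(f"GRE link efficiency: {efficiency:.1%}")  # -> GRE link efficiency: 98.4%
```

Note that this header tax is only part of the story: in the measurements, the dominant cost of the GRE scenario is software encapsulation in the kernel, which this arithmetic does not capture.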
Performance Result • Here are the results for throughput and additional overhead. • The throughput achieved by the proposed system is close to that of the baseline system. • The proposed system incurs little overhead, so we conclude that the VLAN translation mechanism and its NetFPGA implementation are efficient. • The IP tunnel approach showed the worst performance because it must encapsulate packets in software. Figure 4: Performance Result of Throughput
Performance test Over Public Internet With the proposed system, we can easily set up a virtual network, as shown in the previous figure, for proof-of-concept and performance measurement on real production networks.
Performance test Over Public Internet The green path, path 1, is routed by IP tunnel via the public IP route, and the red path, path 2, is from the “virtual transit” created by the proposed mechanism.
Performance Result • WAN Measurement • The data show that the latency of path 2 is higher than that of path 1, because path 2 is a longer, triangular path. • These data show that our approach enables multiple paths between two datacenters over the WAN. • The results also show that the proposed mechanism performed well in a WAN environment over a long-running test. Figure 5: Performance Result of Latency
Contribution • We propose a platform to dynamically build and manage virtual networks across multiple data centers. The specific contributions presented in this paper are the following: • we propose a novel mechanism called VirtualTransits to transparently extend a virtual network across one or more data centers. • we present an integrated system that incorporates several important datacenter networking schemes into a coherent platform, enabling the dynamic configuration and management of virtual networks both intra-cloud and inter-cloud. • we provide performance measurements from an implementation on real production networks. Our system can set up a new path and stretch a virtual network across datacenters in 2 seconds, compared with 27 seconds for previous VPN-based approaches.
Conclusion • The salient advantage of our system is that it supports incremental deployment without specific wiring topologies or significant modifications to switches and hypervisors. • Unlike state-of-the-art solutions, the presented solution can provide dynamic virtual networks for a federation of independent infrastructure providers across production networks. • The system is currently used for cloud federation and research-testbed interconnection. The performance data show that our approach performs well in real deployments.
Future Work • Efficient path selection and allocation is important but not well addressed in this paper. • The negotiation and enforcement of service level agreement across multiple administrative domains is needed for some critical applications or enterprise services.
Thank You ! Further Question: myluo@kuas.edu.tw