MPLS And The Data Center
Adrian Farrel
Old Dog Consulting / Juniper Networks
adrian@olddog.co.uk
afarrel@juniper.net
www.mpls2012.com
Agenda
• What do I mean by “Data Center”?
• Design goals and requirements
• Handling mobility within the data center
• Connectivity between data center sites
• Can MPLS add value?
Everyone’s Data Center is Different
• There are some common fundamental concepts
  • Racks of servers
  • VMs hosted on blades
  • VMs connected
    • On server
    • In rack
    • In DC
    • In other DCs
  • Connectivity to the external services
[Diagram: data center sites attached to an IP/MPLS network; top-of-rack switches, virtual switches (VSw), VMs on server blades, and VM-based appliances (storage, load balancer, NAT, firewall) providing L2/L3 services]
Design Goals
• Provide separate logical tenant networks in Data Center over common IP physical infrastructure
  • Design Goal: 100K tenants, 10M Virtual Machines (VMs)
• Need a data plane encapsulation (sketch below)
  • Examples exist
    • Virtual Extensible Local Area Networks (VXLAN)
    • Network Virtualization using Generic Routing Encapsulation (NVGRE)
• Discovery is needed
  • Data plane learning seems popular
  • ARP doesn’t scale and needs to be suppressed
  • Maybe the control plane can help
• A control plane is also required
  • Static configuration is a solution (Hypervisor with SDN?)
  • A control plane can make life a lot easier
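To make the data plane encapsulation concrete, here is a minimal Python sketch of VXLAN encapsulation: an 8-byte header carrying a 24-bit VNI is prepended to the tenant Ethernet frame before the outer UDP/IP headers are added. The function and constant names are illustrative, not anything from the talk.

```python
# Illustrative sketch only: packing a VXLAN header (per RFC 7348) around a
# tenant Ethernet frame, to show how a 24-bit VNI carries tenant context.
import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VXLAN destination port
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Header layout: flags(8) | reserved(24) | VNI(24) | reserved(8)
    word1 = VXLAN_FLAG_VNI_VALID << 24
    word2 = vni << 8
    header = struct.pack("!II", word1, word2)
    return header + inner_frame

# The outer IP/UDP headers (source VTEP -> destination VTEP, dst port 4789)
# would be added by the sending hypervisor or ToR switch.
packet = vxlan_encapsulate(b"\x00" * 60, vni=100234)
```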
Multi-Tenancy: Requirements
• Multi-tenancy has become a core requirement of data centers
  • Including for Virtual Machines (VMs) and VM multi-tenancy
  • It proves to be a real stretch
• Three key requirements needed to support multi-tenancy are
  • Traffic isolation
  • Address independence
  • Fully flexible VM placement and migration
• IETF’s NVO3 WG considers approaches to multi-tenancy that reside at the network layer rather than using traditional isolation (e.g., VLANs)
  • An overlay model to interconnect VMs distributed across a data center (sketch below)
  • We already have network layer overlay solutions
  • More about this later
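As a rough illustration of how the overlay model gives traffic isolation and address independence, the sketch below keeps forwarding state per tenant (keyed by a VNI), so identical tenant addresses never collide. The class and method names are invented for this example and are not NVO3-defined constructs.

```python
# Minimal sketch: forwarding state keyed by (tenant VNI, inner address),
# so two tenants may reuse the same MAC or IP without colliding.
from collections import defaultdict

class OverlayForwarder:
    def __init__(self):
        # tenant VNI -> {inner MAC -> IP of the NVE/hypervisor hosting the VM}
        self.tables = defaultdict(dict)

    def learn(self, vni: int, mac: str, nve_ip: str) -> None:
        self.tables[vni][mac] = nve_ip

    def lookup(self, vni: int, mac: str):
        # A miss would normally trigger flooding or a control-plane query.
        return self.tables[vni].get(mac)

fwd = OverlayForwarder()
fwd.learn(vni=10, mac="00:11:22:33:44:55", nve_ip="192.0.2.1")
fwd.learn(vni=20, mac="00:11:22:33:44:55", nve_ip="192.0.2.7")  # same MAC, other tenant
assert fwd.lookup(10, "00:11:22:33:44:55") != fwd.lookup(20, "00:11:22:33:44:55")
```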
Mobility
• Virtual Machines need to be moved between blades
• How often?
  • Dynamic load balancing
  • Planned service
  • Failure recovery
• How much?
  • Blades, servers, racks
• How seamless?
  • Application re-start
  • Packet loss
  • Hitless
• Challenges are recovery/preservation of connectivity
  • VMs need to preserve identity
  • L2 or L3?
  • Need rapid location discovery/advertisement (sketch below)
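The sketch below illustrates the location-tracking problem a VM move creates: the VM keeps its identity (MAC/IP) while its location changes, and a sequence number lets late or stale advertisements be discarded (the same idea E-VPN applies to MAC mobility). The data structures and names are hypothetical.

```python
# Illustrative sketch: track a VM's current location by its identity (MAC);
# newer advertisements (higher sequence number) win, stale ones are ignored.
from dataclasses import dataclass

@dataclass
class Location:
    host: str       # blade / hypervisor currently running the VM
    seq: int = 0    # bumped on every move so old advertisements lose

locations: dict = {}   # VM identity (e.g. MAC) -> current Location

def advertise(vm_mac: str, host: str, seq: int) -> None:
    current = locations.get(vm_mac)
    if current is None or seq > current.seq:
        locations[vm_mac] = Location(host, seq)

advertise("00:11:22:33:44:55", "rack3-blade7", seq=1)
advertise("00:11:22:33:44:55", "rack9-blade2", seq=2)   # VM moved
advertise("00:11:22:33:44:55", "rack3-blade7", seq=1)   # late/stale update, ignored
assert locations["00:11:22:33:44:55"].host == "rack9-blade2"
```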
Inter Data Center Connectivity
• Many reasons for connectivity
  • Applications in different DCs need to talk
  • VMs may be gathered into VPNs (virtual VPNs?)
  • One application’s data might be stored in another DC
  • Stored data has to be synchronized between DCs
• Connectivity between DC sites is like VPN connectivity
  • Except it may be “tunnelling” virtual VPN connectivity
• And, of course, connectivity to the outside world
What do we Mean by MPLS?
• Odd time and place to be asking this question
• MPLS offers a versatile encapsulation technique (sketch below)
  • Small headers
  • Nested encapsulation
  • Simple forwarding
  • Special meaning labels
• MPLS provides a range of control plane protocols
  • These have different applicability
  • Some are more complex than others
  • Supports static configuration
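As a small illustration of why the MPLS encapsulation is compact and nests easily, the sketch below packs 32-bit label stack entries (20-bit label, 3-bit traffic class, bottom-of-stack bit, 8-bit TTL) and pushes a two-label stack. The helper names are illustrative only.

```python
# Minimal sketch: each MPLS label stack entry is just 32 bits, and deeper
# encapsulation is simply another entry pushed on top of the stack.
import struct

def label_entry(label: int, tc: int = 0, bottom: bool = False, ttl: int = 64) -> bytes:
    word = (label & 0xFFFFF) << 12 | (tc & 0x7) << 9 | (1 if bottom else 0) << 8 | (ttl & 0xFF)
    return struct.pack("!I", word)

def push_stack(payload: bytes, labels: list) -> bytes:
    """Push labels so the first list element ends up outermost."""
    stack = b""
    for i, label in enumerate(reversed(labels)):
        stack = label_entry(label, bottom=(i == 0)) + stack
    return stack + payload

# e.g. an outer transport label and an inner service (VPN) label
frame = push_stack(b"tenant-packet", labels=[299776, 16])
assert len(frame) == 8 + len(b"tenant-packet")   # two 4-byte entries of overhead
```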
The E-VPN
• Designed for scalability and ease of deployment
• Provider Edge (PE) can be in ToR switch and/or Hypervisor
• Operator defined networks – mesh, hub & spoke, extranets, etc.
• Control plane learning using BGP
• VM Mobility – all PEs know VM’s E-VPN location
• VPN and Virtual LAN auto-discovery
• ARP flood suppression (sketch below)
• Control-plane scaling using Route Reflectors, RT Constrain, ESI, MAC aggregation
• Control & data plane traffic for VPNs only sent to PE with active VPN members
• Scalable fast convergence using Block MAC address withdrawal
• Support for MAC prefixes (e.g., default MAC route to external DC)
• Broadcast & Multicast traffic over multicast trees or ingress replication
• Active/active multi-homing
  • CE sees LAG, PEs see Ethernet Segment (set of attachments to same CE)
• 4B tenant VPNs, 4B virtual LANs per tenant VPN
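A minimal sketch of control-plane learning and ARP flood suppression, assuming a PE populates a per-EVI cache from received BGP MAC/IP advertisements and proxy-answers ARP requests from it rather than flooding them. The function names and structures are invented for illustration.

```python
# Illustrative sketch: BGP-learned MAC/IP bindings let a PE answer ARP
# locally instead of flooding the request across the data center fabric.

# (EVI, tenant IP) -> tenant MAC, populated from MAC/IP advertisement routes
arp_cache: dict = {}

def on_bgp_mac_ip_route(evi: int, mac: str, ip: str) -> None:
    arp_cache[(evi, ip)] = mac

def on_arp_request(evi: int, target_ip: str):
    """Return the MAC to proxy-reply with, or None to fall back to flooding."""
    return arp_cache.get((evi, target_ip))

on_bgp_mac_ip_route(evi=42, mac="00:aa:bb:cc:dd:ee", ip="10.0.0.5")
assert on_arp_request(42, "10.0.0.5") == "00:aa:bb:cc:dd:ee"
assert on_arp_request(42, "10.0.0.9") is None   # unknown: flood or query instead
```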
MPLS E-VPN Routes
• MAC Advertisement Route
  • Distributes MAC & IP address to PE & MPLS label binding
• Per EVI Ethernet AD Route
  • Distributes Ethernet Segment to PE & MPLS label binding
  • Used in active/active multi-homing
• Both carry a 24-bit MPLS label field
  • Use of MPLS label is very similar to VNID but supports local significance
  • Distribute VNID in MPLS label field
    • Either global or local significance
    • Local significance allows it to represent EVI, Port, MAC address, or MAC address range (sketch below)
• Data plane encapsulation specified using Tunnel Encapsulation attribute (RFC 5512)
  • Distributed with both of the above routes
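A simplified sketch of the information carried in a MAC Advertisement route and of a locally significant label: the egress PE allocates the label itself, so on receipt the label alone identifies the EVI and attachment port with no further MAC lookup. Field names loosely follow RFC 7432; the allocation scheme shown is hypothetical, not a wire encoding.

```python
# Illustrative sketch: the route's content, plus a per-PE table showing how a
# locally significant label maps straight to forwarding context at the egress.
import itertools
from dataclasses import dataclass

@dataclass
class MacAdvertisementRoute:
    rd: str                 # route distinguisher of the advertising PE/EVI
    esi: str                # Ethernet Segment Identifier
    ethernet_tag: int
    mac: str
    ip: str                 # optional IP binding (enables ARP suppression)
    mpls_label: int         # 24-bit field: an MPLS label or a VNID

# Egress PE: locally allocated labels -> (EVI, attachment port)
local_label_table: dict = {}
_label_pool = itertools.count(16)

def allocate_label(evi: int, port: str) -> int:
    label = next(_label_pool)
    local_label_table[label] = (evi, port)
    return label

label = allocate_label(evi=42, port="ge-0/0/1.100")
route = MacAdvertisementRoute("192.0.2.1:42", "00:00:00:00:00:00:00:00:00:01",
                              0, "00:aa:bb:cc:dd:ee", "10.0.0.5", label)
assert local_label_table[route.mpls_label] == (42, "ge-0/0/1.100")
```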
E-VPN is Encapsulation Agnostic
• E-VPN Instance can support multiple data plane encapsulations (MPLS, VXLAN, NVGRE, etc.)
  • MPLS encapsulation is just one option
• Encapsulations advertised in BGP – ingress uses encapsulation supported by egress (sketch below)
  • This use of BGP is not complicated
• Broadcast & multicast use encapsulation-specific shared trees
• Allows interoperability with existing E-VPN & L3VPN deployments
  • This makes inter-DC really easy
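The sketch below shows the encapsulation agreement in miniature: the ingress intersects the encapsulations the egress advertised (e.g. via the Tunnel Encapsulation attribute) with what it supports itself, then applies a local preference order. The preference list and names are assumptions for illustration.

```python
# Illustrative sketch: pick a data plane encapsulation both ends support.
LOCAL_SUPPORT = ["mpls", "vxlan"]            # what this ingress PE can send
PREFERENCE = ["mpls", "vxlan", "nvgre"]      # illustrative local policy

def choose_encapsulation(egress_advertised: list) -> str:
    for encap in PREFERENCE:
        if encap in LOCAL_SUPPORT and encap in egress_advertised:
            return encap
    raise ValueError("no common encapsulation with egress PE")

assert choose_encapsulation(["vxlan", "nvgre"]) == "vxlan"   # MPLS not offered by egress
assert choose_encapsulation(["mpls", "vxlan"]) == "mpls"
```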
Is MPLS The Answer?
• What was the question?
• Do we need another control plane protocol?
  • Why can’t we use what we already have?
  • Frankly, BGP is not that hard and does what we need
• Can we integrate the DC with the outside world?
  • Gateways, tunnelling and encapsulation are always possible
  • Protocol gateways are a bit of a mess
  • E-VPN and L3VPN connectivity just works
• Do we need another L2 encapsulation?
  • There are plenty available, just pick your favorite
  • This is an MPLS conference
Questions?
afarrel@juniper.net
adrian@olddog.co.uk