Stanford University. Software Defined Networks and OpenFlow. SDN CIO Summit 2010. Nick McKeown & Guru Parulkar, in collaboration with Martin Casado and Scott Shenker, and contributions by many others.
Executive Summary • The network industry is starting to restructure • The trend: “Software Defined Networks” • Separation of control from the datapath • Faster evolution of the network • It has started in large data centers • It may spread to WAN, campus, enterprise, home and cellular networks • GENI is putting SDN into the hands of researchers
Cellular industry • Recently made the transition to IP • Billions of mobile users • Need to securely extract payments and hold users accountable • IP sucks at both, yet is hard to change. How can they fix IP to meet their needs?
Telco Operators • Global IP traffic growing 40-50% per year • End-customer monthly bill remains unchanged • Therefore, CAPEX and OPEX need to fall 40-50% per Gb/s per year • But in practice, they fall by only ~20% per year. How can they stay in business? How can they differentiate their service?
Already happening Enterprise WiFi • Set power and channel centrally • Route flows centrally, cache decisions in APs • CAPWAP etc. Telco backbone networks • Calculate routes centrally • Cache routes in routers
Experiment: Stanford campus (2006). How hard is it to centrally control all flows? 35,000 users, 10,000 new flows/sec, 137 network policies, 2,000 switches, 2,000 switch CPUs.
How many $400 PCs would it take to centralize all routing and all 137 policies? [Diagram: controllers managing a network of Ethernet switches connecting Host A and Host B.] [Ethane, SIGCOMM ’07]
Answer: less than one
If you can centralize control, eventually you will, with replication for fault-tolerance and performance scaling.
The Current Network • Millions of lines of source code, billions of gates, 5,900 RFCs: a barrier to entry • Routing, management, mobility management, access control, VPNs, … implemented as features in each device’s operating system atop specialized packet-forwarding hardware • Bloated, power hungry, vertically integrated • Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, … • Looks like the mainframe industry in the 1980s
Restructured Network [Diagram: the features move out of each device’s operating system into a common Network OS that runs above simple, specialized packet-forwarding hardware.]
The “Software-defined Network” • 1. Open interface to packet forwarding (OpenFlow) • 2. At least one Network OS, probably many, open- and closed-source • 3. Well-defined open API for features [Diagram: features running on a Network OS that speaks OpenFlow down to the packet-forwarding elements.]
OpenFlow Basics: a narrow, vendor-agnostic interface to control switches, routers, APs and base stations.
Step 1: Separate control from datapath. [Diagram: a single Network OS controls many OpenFlow switches.]
Step 2: Cache flow decisions in the datapath. The Network OS installs match-action entries in each OpenFlow switch’s flow table, e.g. “If header = x, send to port 4”, “If header = y, overwrite header with z, send to ports 5, 6”, “If header = ?, send to me (the controller)”. A minimal sketch of this lookup-and-miss behavior follows below. [Diagram: Network OS populating the flow tables of several OpenFlow switches.]
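As a rough illustration of the caching idea, here is a minimal, hypothetical sketch of a switch consulting its flow table and punting misses to the controller; the class and function names are invented for this example, and this is not the OpenFlow wire protocol or any vendor API.

```python
# Conceptual sketch of OpenFlow-style flow caching (hypothetical names,
# not the real OpenFlow protocol or any vendor API).

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class FlowEntry:
    pattern: str                      # header bits; 'x' = wildcard, e.g. "1000x01x"
    action: Callable[[str], None]     # what to do with a matching packet

def matches(pattern: str, header: str) -> bool:
    """True if every non-wildcard bit of the pattern equals the header bit."""
    return len(pattern) == len(header) and all(
        p in ('x', h) for p, h in zip(pattern, header))

class Switch:
    def __init__(self, send_to_controller: Callable[[str], Optional[FlowEntry]]):
        self.flow_table: List[FlowEntry] = []
        self.send_to_controller = send_to_controller

    def packet_in(self, header: str) -> None:
        for entry in self.flow_table:          # datapath: cached decisions
            if matches(entry.pattern, header):
                entry.action(header)
                return
        # Miss ("If header = ?, send to me"): ask the Network OS, then cache
        # its decision so later packets of the flow stay in the datapath.
        entry = self.send_to_controller(header)
        if entry is not None:
            self.flow_table.append(entry)
            entry.action(header)
```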
Plumbing Primitives • Match arbitrary bits in headers: match on any header, or a user-defined header; allows any flow granularity (e.g. Match: 1000x01xx0101001x, where x is a wildcard bit) • Actions: forward to port(s), drop, send to controller; overwrite header with mask; push or pop; forward at a specific bit-rate. A toy illustration of the masked-overwrite action follows below.
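A toy illustration of the “overwrite header with mask” action, with headers modeled as plain integers rather than real OpenFlow match structures (purely an assumption for illustration):

```python
# Conceptual sketch of the "overwrite header with mask" action
# (hypothetical representation; headers as integers, not real OpenFlow structs).

def rewrite(header: int, value: int, mask: int) -> int:
    """Replace only the masked bits of the header with bits from value."""
    return (header & ~mask) | (value & mask)

# Example: rewrite the low 8 bits (say, a hypothetical "destination" field).
old = 0b1000_0011_0101_0010
new = rewrite(old, value=0b0000_0000_1111_0000, mask=0b0000_0000_1111_1111)
assert new == 0b1000_0011_1111_0000
```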
Control path and data path. In a conventional switch the control path (software) runs inside the box, above the data path (hardware). With OpenFlow, an external OpenFlow controller speaks the OpenFlow protocol (over SSL) to the OpenFlow data path in hardware.
Recap: the “Software Defined Network” • 1. Open interface to packet forwarding • 2. At least one Network OS, probably many, open- and closed-source • 3. Well-defined open API for features
Network OS • Several commercial Network OSes are in development, with commercial deployments expected in late 2010 • Research: the research community mostly uses NOX, open source at http://noxrepo.org; expect new research OSes in late 2010. A hedged sketch of a controller application follows below.
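For a feel of what a feature on top of a Network OS does, here is a hedged sketch of a trivial learning application that answers flow-table misses with cacheable match-action decisions; the names and interfaces are hypothetical, and this is not the NOX API.

```python
# Hypothetical sketch of a controller application on a Network OS; not the NOX
# API. Policy: learn which port each address was seen on, and answer flow-table
# misses with a (match, action) decision the switch can cache.

from typing import Dict, Optional, Tuple

class LearningApp:
    def __init__(self) -> None:
        self.port_of: Dict[str, int] = {}            # address -> switch port

    def packet_in(self, src: str, dst: str, in_port: int) -> Optional[Tuple[str, str]]:
        """Called by the Network OS on a flow-table miss.

        Returns (match, action) for the switch to cache, or None to keep
        sending packets for this destination up to the controller.
        """
        self.port_of[src] = in_port                  # learn where src lives
        out_port = self.port_of.get(dst)
        if out_port is None:
            return None                              # destination still unknown
        return (f"dst == {dst}", f"send to port {out_port}")

# Usage sketch:
app = LearningApp()
app.packet_in(src="A", dst="B", in_port=1)           # B unknown: no rule yet
print(app.packet_in(src="B", dst="A", in_port=4))    # ('dst == A', 'send to port 1')
```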
Example: New Data Center • Cost: 200,000 servers with a fanout of 20 means 10,000 switches; at $5k per vendor switch that is $50M, versus $10M at $1k per commodity switch; across 10 data centers the savings are $400M (the arithmetic is spelled out below) • Control: more flexible control, quicker improvement and innovation, and it enables “cloud networking” • Several large data centers will use SDN.
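Restating the slide's cost figures as explicit arithmetic (no numbers beyond those on the slide):

```python
# The slide's cost arithmetic, spelled out.
servers, fanout = 200_000, 20
switches = servers // fanout                      # 10,000 switches
vendor_cost = switches * 5_000                    # $50M with $5k vendor switches
commodity_cost = switches * 1_000                 # $10M with $1k commodity switches
savings_per_dc = vendor_cost - commodity_cost     # $40M per data center
print(savings_per_dc * 10)                        # $400M across 10 data centers
```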
Data Center Networks • Existing solutions: tend to increase hardware complexity; unable to cope with virtualization and multi-tenancy • Software Defined Network: OpenFlow-enabled vSwitch (Open vSwitch, http://openvswitch.org); the network is optimized for the data center owner; several commercial products are under development.
What we are doing at Stanford • Defining the OpenFlow Spec • Check out http://OpenFlow.org • Open weekly meetings at Stanford • Enabling researchers to innovate • Add OpenFlow to commercial switches, APs, … • Deploy on college campuses • “Slice” network to allow many experiments
Isolated “slices” [Diagram: a virtualization or “slicing” layer sits between the packet-forwarding elements and several independent network operating systems (1 to 4), each running its own features over OpenFlow.]
FlowVisor Creates Virtual Networks. [Diagram: FlowVisor sits between the OpenFlow switches and several guest controllers (a PlugNServe load-balancer, an OpenFlow wireless experiment, an OpenPipes experiment), speaking the OpenFlow protocol both downwards and upwards and enforcing a per-slice policy.] Multiple, isolated slices run in the same physical network; a conceptual slicing sketch follows below.
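A conceptual sketch, in the spirit of FlowVisor but not its actual implementation or configuration format, of how a slicing layer might dispatch switch events to the controller that owns the matching flowspace; all names and interfaces here are hypothetical.

```python
# Conceptual sketch of flowspace slicing in the spirit of FlowVisor
# (hypothetical code, not FlowVisor's real implementation or config format).
# Each slice owns a predicate over packet headers; switch events are dispatched
# only to the controller whose flowspace matches, keeping slices isolated.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

Header = Dict[str, int]                    # e.g. {"vlan": 10, "tcp_dst": 80}

@dataclass
class Slice:
    name: str
    owns: Callable[[Header], bool]         # flowspace predicate
    controller: Callable[[Header], None]   # where to send this slice's events

class Slicer:
    def __init__(self, slices: List[Slice]) -> None:
        self.slices = slices

    def packet_in(self, header: Header) -> Optional[str]:
        for s in self.slices:
            if s.owns(header):
                s.controller(header)       # forward the event to the slice's controller
                return s.name
        return None                        # outside every slice: drop

# Usage sketch: HTTP traffic to one experiment, VLAN 20 to another.
slicer = Slicer([
    Slice("load-balancer", lambda h: h.get("tcp_dst") == 80, print),
    Slice("wireless-expt", lambda h: h.get("vlan") == 20, print),
])
slicer.packet_in({"vlan": 20, "tcp_dst": 22})
```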
Application-specific Load-balancing • Goal: minimize HTTP response time over the campus network • Approach: route over the path that jointly minimizes <path latency, server latency>, i.e. “pick path & server” together (a sketch of this joint choice follows below). [Diagram: a load-balancer feature on the Network OS steering flows from the Internet across the campus OpenFlow switches.]
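A minimal sketch of the joint <path latency, server latency> choice; the latency numbers below are made up for illustration, whereas the real load-balancer would measure them from the network.

```python
# Conceptual sketch of jointly picking a (path, server) pair
# (hypothetical data; the real system measures these values).

from typing import Dict, List, Tuple

def pick(path_latency: Dict[Tuple[str, str], float],
         server_latency: Dict[str, float],
         paths: List[Tuple[str, str]]) -> Tuple[str, str]:
    """Return the (path, server) pair with the smallest combined latency."""
    return min(paths, key=lambda p: path_latency[p] + server_latency[p[1]])

# Usage sketch: two servers reachable over three candidate paths.
paths = [("p1", "s1"), ("p2", "s1"), ("p3", "s2")]
path_latency = {("p1", "s1"): 3.0, ("p2", "s1"): 1.0, ("p3", "s2"): 2.0}
server_latency = {"s1": 5.0, "s2": 2.5}
print(pick(path_latency, server_latency, paths))   # ('p3', 's2'): 2.0 + 2.5 = 4.5
```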
Intercontinental VM Migration Moved a VM from Stanford to Japan without changing its IP. VM hosted a video game server with active network connections.
Converging Packet and Circuit Networks • Goal: a common control plane for “Layer 3” and “Layer 1” networks • Approach: add OpenFlow to all switches and use a common network OS (a sketch of such a common abstraction follows below). [Diagram: NOX speaking the OpenFlow protocol to IP routers, WDM switches and a TDM switch.] [Supercomputing 2009 demo] [OFC 2010]
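One way to picture a common control plane is a single programming interface that both packet and circuit elements implement; the sketch below is a hypothetical abstraction, not the code behind the Supercomputing 2009 or OFC 2010 demos.

```python
# Conceptual sketch of a common control abstraction over packet (L3) and
# circuit (L1) elements; hypothetical interfaces, not the demos' actual code.

from abc import ABC, abstractmethod
from typing import List

class Element(ABC):
    """Anything the common network OS can program with a forwarding decision."""
    @abstractmethod
    def install(self, match: str, out: str) -> None: ...

class IPRouter(Element):
    def install(self, match: str, out: str) -> None:
        print(f"IP router: packets matching {match} -> port {out}")

class WDMSwitch(Element):
    def install(self, match: str, out: str) -> None:
        # For a circuit element, "match" names an input wavelength/port instead
        # of packet header bits, but the controller-facing call is the same.
        print(f"WDM switch: cross-connect {match} -> {out}")

def provision(path: List[Element], match: str, out: str) -> None:
    """One control program provisions an end-to-end path across both layers."""
    for element in path:
        element.install(match, out)

provision([IPRouter(), WDMSwitch(), IPRouter()], match="dst=10.0.0.0/24", out="A")
```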
ElasticTree • Goal: reduce energy usage in data center networks • Approach: reroute traffic onto a subset of the topology, then shut off the unused links and switches to reduce power (a toy consolidation sketch follows below). [Diagram: a DC manager asks the Network OS to “pick paths”; in the second step of the animation, several switches and links are marked off.] [NSDI 2010]
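To make the approach concrete, here is a toy greedy heuristic in the spirit of ElasticTree (not the paper's optimizer): keep just enough links to carry current demand and power off the rest.

```python
# Toy greedy consolidation sketch in the spirit of ElasticTree
# (illustrative heuristic only, not the paper's optimization formulation).

from typing import List, Tuple

def links_to_keep(links: List[Tuple[str, float]], demand: float) -> List[str]:
    """links = [(name, capacity_gbps)]; returns names of links left powered on."""
    keep, capacity = [], 0.0
    for name, cap in sorted(links, key=lambda l: l[1], reverse=True):
        if capacity >= demand:
            break                        # remaining links can be shut off
        keep.append(name)
        capacity += cap
    return keep

# Usage sketch: 40 Gb/s of demand over four 20 Gb/s uplinks -> two stay on.
print(links_to_keep([("l1", 20), ("l2", 20), ("l3", 20), ("l4", 20)], demand=40))
```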
Executive Summary • The network industry is starting to restructure • The trend: “Software Defined Networks” • Separation of control from the datapath • Faster evolution of the network • It has started in large data centers • It may spread to WAN, campus, enterprise, home and cellular networks • GENI is putting SDN into the hands of researchers