
Software Defined Networking



Presentation Transcript


  1. Computer Systems – July 2013 Software Defined Networking Zurich University of Applied Sciences Lecturer: Philipp Aeschlimann e-mail: aepp@zhaw.ch Phone: 058/934 69 64 Office: TD 03.02 Summer School 2013

  2. Introduction / Goals • Software Defined Networking • Goals • What is Software Defined Networking • How is SDN implemented • Example of an SDN controller (Ryu) • Know what a Python decorator is

  3. Outline / Structure • Software Defined Networking • OpenFlow – An Implementation • RYU • Conclusion

  4. A new network architecture • Current network architectures are commonly organized in a hierarchy • classical client-server structure • not really suited for mobile devices, server virtualization and dynamic structures • Network schemas face new requirements • Handling of a hybrid cloud stack • Network for users on demand • Services are being offered in a cloud • Big data and reliability

  5. Constraints of current network technologies • Complexity • replacement of network devices in a topology • what has to be done? • Consistent networking strategy • change of a policy or adjustment of the ACL • Scaling of networks • how to handle multiple clients (multi-tenancy)? • how fast can a network grow? • Vendor dependency • vendor independence is often not given

  6. Constraints of current network technologies • Use case university • Separation of the network into a productive and an experimental one • Make the network available for multiple universities/schools (depending on the application) • Data center • highly scalable, virtual networks • automated migration of virtual machines and the associated network(s) • Cloud • elasticity is an important component • offer (the) network as a service • Game server • follows later in these slides

  7. SDN – a paradigm

  8. The Stanford Use Case • Game server video

  9. From paradigm to implementation • OpenFlow is “the implementation” of the SDN paradigm • OpenFlow itself is ”only” the protocol • Developed at Stanford University https://openflow.stanford.edu/dashboard.action • For a concrete implementation we need more than just a protocol • ”OpenFlow ready hardware” • A centralized ”controller” • Network logic inside the controller • The freedom of choice, we all love it! • Which network device(s) should I buy? • Which controller should be used? • Protocol and centralized approach • Who develops the network logic?

  10. The Controller Each existing controller has its own strengths • Floodlight • Written in Java and offering a nice web GUI. Lots of plugins available • NOX (POX) • The OpenFlow reference controller, written in C/C++; for now the fastest implementation • POX is the developer-oriented controller written in Python, offering the exact same API as NOX • Trema • a controller written in Ruby, with native C extensions for Ruby • Beacon • a Java controller with a good multithreading implementation (Floodlight was forked from Beacon) • NodeFlow • a promising up-and-coming controller built on node.js

  11. Development tools • There are multiple development tools • dpctl • a tool to administer datapaths • can also edit FlowTable entries • is not a replacement for a controller • mininet • a tool to create virtual networks, especially for SDN • mininet lets you easily create large virtual infrastructures • iperf • a tool to measure the throughput of network devices • cbench • a benchmarking tool to generate traffic

  12. Use Case walkthrough

  13. A packet in the network • h2 wants to check if h3 is reachable: h2 ping -c1 h3 • The packet reaches s1, but s1 doesn't know what to do with it • s1 sends the packet to c0. c0 can then make an intelligent decision • c0 now knows h2's port but not the port of h3 • s1 will now perform a “flood” • h3 receives the packet and sends an answer • s1 receives this answer and again asks c0 what to do with the packet • no more flooding is necessary since c0 already knows on which port h2 is connected • at the same time, c0 learns on which port h3 is connected

  14. The FlowTable • In the process just described, the controller has to be asked/contacted for every single packet • not sensible, since we have network devices with multicore CPUs • It would be better if the switch could decide autonomously • in OpenFlow every switch has its own FlowTable • Who fills up this FlowTable? Who creates entries in this FlowTable? • The controller - following some programmable logic

  15. Mininet • The usage of mininet # mn --topo single,3 --mac --switch ovsk --controller remote • This command creates the following SDN network: • 3 virtual hosts with their own IP addresses • A software switch (kernel) with 3 ports • Connects every host to this software switch • The MAC addresses of the hosts are set to match the host IP addresses • The OpenFlow switch is able to connect to a remote controller • After that you are in the Mininet console • In this console additional commands can be issued (related to hosts) • The same topology can also be built from Mininet's Python API (see the sketch below)
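A minimal sketch (not part of the original slides) of how roughly the same topology could be created with Mininet's Python API instead of the mn command; the class names come from the standard mininet package, and the sketch assumes a controller listening on the default remote-controller address/port.

#!/usr/bin/env python
# Rough equivalent of: mn --topo single,3 --mac --switch ovsk --controller remote
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.node import OVSKernelSwitch, RemoteController
from mininet.cli import CLI

def run():
    net = Mininet(topo=SingleSwitchTopo(k=3),      # 1 switch, 3 hosts
                  switch=OVSKernelSwitch,          # kernel-space Open vSwitch
                  controller=RemoteController,     # expects an external controller
                  autoSetMacs=True)                # --mac: readable MAC addresses
    net.start()
    CLI(net)    # drop into the Mininet console, as the mn command does
    net.stop()

if __name__ == '__main__':
    run()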

  16. Mininet Additional Mininet commands (only via the Mininet console!) mininet> nodes Lists all Mininet nodes mininet> h2 ifconfig Well-known commands can be executed on individual hosts mininet> xterm h2 h3 Opens a console (xterm) for host 2 and host 3 # mn -c Resets/cleans up the current Mininet instance (run from the system shell)

  17. dpctl dpctl is a tool for development and maintenance, nothing else # dpctl show tcp:127.0.0.1:6634 Shows basic information about the topology # dpctl dump-flows tcp:127.0.0.1:6634 Shows the FlowTable of the switch at 127.0.0.1 via port 6634 # dpctl add-flow tcp:127.0.0.1:6634 in_port=1,actions=output:2 # dpctl add-flow tcp:127.0.0.1:6634 in_port=2,actions=output:1 Creates permanent entries in the FlowTable of 127.0.0.1

  18. RYU – An OpenFlow Controller • RYU means “flow” (in Japanese) • Started by the NTT laboratories OSRG group • Well integrated with OpenStack networking (Quantum, now Neutron) • A controller is NOT a framework • Although the controller offers abstract functions/functionality • Programming the controller is a central task for network administrators • All principles of software design apply here • This offers all the usual advantages (e.g. reusability of code) • RYU itself consists of 3 parts (roughly)

  19. RYU – Structure The 3 parts of RYU are: • Applications • Differentiation between stock applications and user applications • Applications can be defined at start-up or can be loaded/included and used at runtime • RYU API • Offers basic/central functions to develop network-control applications • The RYU API is used by a programmer, but not extended • Self-made APIs can be developed • Applications contain/use the API • The OpenFlow implementation itself • The controller has to understand the OpenFlow protocol • Implemented as a Python module in the package ofproto

  20. RYU – Components • A programmed RYU controller uses multiple applications • The collection of all used applications defines the network functionality • simple_switch.py • Basic Layer-2 switch application • Only exact-match FlowTable entries are created • simple_vlan.py • This application provides VLAN separation with the OVS tag function • rest_firewall.py • Rule-based firewall application • Can use the matching structure from OpenFlow to filter traffic • And many more...

  21. RYU – API • The RYU API can be viewed as a collection of helper functions • These helper functions are used to program the applications • The core object offers most of these helper functions • It is used as follows (RYU convention) from ryu.base.app_manager import RyuApp • A minimal application skeleton follows below • Additional useful helpers are from ryu.lib import dpid as dpid_lib • For working with datapaths
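A minimal sketch (not from the slides) of an application built on the RyuApp base class; the class name MinimalApp and the file name in the comment are made up for illustration.

from ryu.base.app_manager import RyuApp
from ryu.ofproto import ofproto_v1_3

class MinimalApp(RyuApp):
    # declare which OpenFlow version(s) this application speaks
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(MinimalApp, self).__init__(*args, **kwargs)
        self.logger.info('MinimalApp loaded')   # RyuApp provides a logger

# start it with: ryu-manager minimal_app.py   (file name assumed)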

  22. RYU – API • Additional helpers from ryu.lib import * • Most RYU applications work with packets • Every packet has some kind of header • Additionally there is also a payload • e.g. Ethernet, TCP or IP • The OpenFlow implementation is actually an application and in itself the most important helper from ryu.ofproto import ofproto_v1_3 • That’s how OpenFlow-specific functions, like the communication with a device, can be used • ofproto_v1_3 is a Python module • A short packet-parsing sketch follows below
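A small sketch, not taken from the slides, of how the ryu.lib.packet helpers pull the Ethernet header out of a raw frame; it assumes msg is the OFPPacketIn message delivered with a packet-in event.

from ryu.lib.packet import packet, ethernet

def parse_ethernet(msg):
    # msg.data holds the raw frame that the switch forwarded to the controller
    pkt = packet.Packet(msg.data)
    eth = pkt.get_protocol(ethernet.ethernet)
    return eth.src, eth.dst   # source and destination MAC addresses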

  23. RYU – OpenFlow Events • The controller can listen to several different events • Most events occur / are triggered when a message from a switch arrives at the controller • Every OpenFlow event has the following important attributes (in RYU) • connection: a connection resource to a device • datapath: the datapath ID • msg: the OpenFlow message object which triggered the event • All events have “Event” as a prefix by convention • Not all of them are covered in these slides! OFPSwitchFeatures: • This event is triggered as soon as the connection between switch and controller is established (a handler sketch follows below)
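A hedged sketch of a handler for this event, modelled on RYU's simple_switch_13 sample application: once the handshake completes, it installs a table-miss entry so that unmatched packets are sent to the controller. The class name FeaturesExample is made up for illustration.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class FeaturesExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        # empty match = match every packet; priority 0 = lowest priority
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
        mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                match=match, instructions=inst)
        datapath.send_msg(mod)   # the table-miss entry is now installed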

  24. RYU – OpenFlow Events Other events: OFPGetConfigReply(Event): • Is triggered when the switch sends its configuration flags OFPPortStatus(Event): • Is triggered when the status of a port changes OFPFlowRemoved(Event): • Is triggered when a FlowTable entry is removed or “dies” (expires)
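As a short illustration (again not from the slides), listening to one of these events follows the same decorator pattern; the class name PortWatcher is made up.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PortWatcher(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPortStatus, MAIN_DISPATCHER)
    def port_status_handler(self, ev):
        # ev.msg.desc describes the port, ev.msg.reason tells why the status changed
        self.logger.info('port %s changed (reason=%s)',
                         ev.msg.desc.port_no, ev.msg.reason)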

  25. RYU – OpenFlow Events The arrival of a packet at a switch is one of the most important events OFPPacketIn: • This event is only triggered if no matching FlowTable entry exists - a “table-miss” • The controller can react to such an event • To do that, a Python decorator is applied to the handler method @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER) def _packet_in_handler(self, ev): • ev: the OpenFlow PacketIn event object • self: the component/class itself

  26. RYU – Python Decorators • The RYU project uses Python decorators extensively • There is no difference between a decorator (the design pattern) and a Python decorator, except the syntax • The “@” symbol is used for this • Using a Python decorator is easy, writing one is a bit harder • Used in RYU in the sense of: • Decorate a method with an OFP event • Decorate a method with an OFP message header • Decorate a method with common OFP protocol attributes • Example at the board! (a small sketch follows below)
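Since the board example is not in the transcript, here is a minimal sketch of what a Python decorator is: a function that takes a function and returns a wrapped version of it; set_ev_cls works on the same principle, attaching event metadata to the decorated handler. The names log_calls and ping are made up.

import functools

def log_calls(func):
    @functools.wraps(func)            # keep the original function's name/docstring
    def wrapper(*args, **kwargs):
        print('calling %s' % func.__name__)
        return func(*args, **kwargs)
    return wrapper

@log_calls                            # equivalent to: ping = log_calls(ping)
def ping(host):
    return 'pinging %s' % host

print(ping('h3'))                     # prints "calling ping", then "pinging h3"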

  27. RYU – OpenFlow Messages The controller uses OpenFlow messages to communicate with the switch • That's the way instructions are sent to the switch • The following message classes all come from the RYU ofproto module OFPActionOutput • Instructs a device to send a packet out of a particular port OFPFlowMod • Instructs a device to modify an entry in the FlowTable OFPMatch • The detection of matches is a central component • Patterns for the FlowTable are created on this basis

  28. RYU – Installing a FlowTable entry The following steps are necessary to install (add) a FlowTable entry (a code sketch follows below): • Creation of a “match” object with parser.OFPMatch • Define a resulting action • This happens with parser.OFPActionOutput • Assign both objects to a FlowMod message, parser.OFPFlowMod • Now send it with the send_msg method of the datapath • Check the wildcards variable
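A hedged sketch of a helper that performs exactly these steps, similar to the add_flow helper found in RYU's sample applications; the example match and port numbers at the end are placeholders, not taken from the slides.

def add_flow(datapath, priority, match, actions):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser
    # in OpenFlow 1.3 the actions are wrapped in an "apply actions" instruction
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            match=match, instructions=inst)
    datapath.send_msg(mod)            # ship the FlowMod to the switch

# example use inside a handler (placeholder values):
# match   = parser.OFPMatch(in_port=1, eth_dst='00:00:00:00:00:03')
# actions = [parser.OFPActionOutput(2)]
# add_flow(datapath, priority=1, match=match, actions=actions)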

  29. RYU – structure of an application • Define the class: class SimpleSwitch(app_manager.RyuApp): • Constructor of the class: def __init__(self, *args, **kwargs): • allocating memory for the MAC addresses (array, DB) • Event packet-in: def _packet_in_handler(self, ev): • this method decides which action the controller will take • Switch logic: create further methods that hold the logic • save the MAC address • optionally install a FlowTable entry • deliver the packet (directly to the receiver or via “flood”) • A condensed sketch of this structure follows below
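A condensed, hedged sketch of the structure just described, modelled on RYU's simple_switch_13 example: it learns the source MAC, then either forwards the packet directly or floods it. Installing the optional FlowTable entry is left out here (see the add_flow sketch above).

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet

class SimpleSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super(SimpleSwitch, self).__init__(*args, **kwargs)
        self.mac_to_port = {}                       # learned MAC -> port mapping

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        datapath = msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser
        in_port = msg.match['in_port']

        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)
        dpid = datapath.id
        self.mac_to_port.setdefault(dpid, {})
        self.mac_to_port[dpid][eth.src] = in_port   # save the MAC address

        # deliver directly if the destination is known, otherwise flood
        out_port = self.mac_to_port[dpid].get(eth.dst, ofproto.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]

        data = msg.data if msg.buffer_id == ofproto.OFP_NO_BUFFER else None
        out = parser.OFPPacketOut(datapath=datapath, buffer_id=msg.buffer_id,
                                  in_port=in_port, actions=actions, data=data)
        datapath.send_msg(out)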

  30. What is a learning switch • A learning switch learns the port-to-host association autonomously • works without use of the spanning-tree algorithm • therefore the system needs some time before it becomes performant • The principle is the same as on the slide “A packet in the network” • Difference: MAC addresses are stored in an array • a DB could be used as well • A learning switch could be useful • BUT a network using the classical spanning tree is more performant

  31. OpenFlow – Peak already reached? • Google adapted its WAN network to OpenFlow • the controller used is unknown or heavily modified • OpenFlow and the Open Networking Foundation (ONF) are widely supported / implemented • OpenFlow is constantly being improved/developed even though the specifications are clearly set • Open questions • forwarding in mixed networks • API for the applications • QoS is supported but could be improved • Whether the big players in the networking business will support OpenFlow is not yet clear / doubtful

  32. Reference Information https://www.opennetworking.org/images/stories/downloads/whitepapers/wp-sdn-newnorm.pdf http://www.openflow.org/documents/openflow-wp-latest.pdf http://www.openflow.org/wk/index.php/Main_Page https://www.opennetworking.org/ https://openflow.stanford.edu/display/ONL/POX+Wiki
