
Projects Related to Coronet


Presentation Transcript


  1. Projects Related to Coronet Jennifer Rexford, Princeton University http://www.cs.princeton.edu/~jrex

  2. Outline • SEATTLE • Scalable Ethernet architecture • Router grafting (joint work with Kobus) • Seamless re-homing of links to BGP neighbors • Applications of grafting for traffic engineering • Static multipath routing (Martin’s AT&T project) • Joint traffic engineering and fault tolerance

  3. SEATTLE Scalable Ethernet Architecture for Large Enterprises (joint work with Changhoon Kim and Matt Caesar) http://www.cs.princeton.edu/~jrex/papers/seattle08.pdf

  4. Goal: Network as One Big LAN • Shortest-path routing on flat addresses • Shortest paths: scalability and performance • MAC addresses: self-configuration and mobility • Scalability without hierarchical addressing • Limit dissemination and storage of host info • Sending packets on slightly longer paths [Figure: topology of switches (S) interconnecting hosts (H)]

  5. SEATTLE Design Decisions * Meanwhile, avoid modifying end hosts

  6. Network-Layer One-hop DHT • Maintains <key, value> pairs with function F • Consistent hash mapping a key to a switch • F is defined over the set of live switches • One-hop DHT • Link-state routing ensures switches know each other • Benefits • Fast and efficient reaction to changes • Reliability and capacity naturally grow with size of the network [Figure: one-hop DHT ring spanning the identifier space 0 to 2^128-1]
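
To make the one-hop DHT concrete, here is a minimal Python sketch of the mapping F described above, assuming the set of live switches is already known from link-state routing. The switch names, the 128-bit MD5-based hash, and the class name are illustrative choices, not part of the SEATTLE specification.

```python
import hashlib
from bisect import bisect_left

def _hash(value: str) -> int:
    """Map a string onto the 128-bit DHT identifier space."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class OneHopDHT:
    """Toy consistent-hash ring over the currently live switches."""

    def __init__(self, live_switches):
        # Each live switch gets a position on the ring derived from its name.
        self.ring = sorted((_hash(s), s) for s in live_switches)

    def resolver_for(self, key: str) -> str:
        """Return the switch responsible for `key` (the mapping F)."""
        h = _hash(key)
        idx = bisect_left(self.ring, (h, ""))
        return self.ring[idx % len(self.ring)][1]   # wrap around the ring

dht = OneHopDHT(["switch-A", "switch-B", "switch-C", "switch-D"])
print(dht.resolver_for("00:1a:2b:3c:4d:5e"))        # resolver for this MAC
```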

  7. Location Resolution <key, val> = <MAC addr, location> [Figure: switch A discovers host x and publishes <MACx, A> at resolver B, where Hash F(MACx) = B, which stores <MACx, A>; switch D, receiving user traffic for x, computes F(MACx) = B and tunnels the traffic to B; B notifies D of <MACx, A>, D tunnels to A, and subsequent traffic is forwarded directly from D to A. Control messages and data traffic are shown separately.]
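
The publish/notify exchange in the figure can be sketched as follows, with a toy hash placement standing in for the consistent hash F; the switch names, function names, and dictionary-based store and cache are hypothetical simplifications.

```python
import hashlib

SWITCHES = ["A", "B", "C", "D", "E"]

def hash_to_switch(key: str) -> str:
    """Simplified stand-in for the hash F that picks the resolver switch."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return SWITCHES[h % len(SWITCHES)]

store = {s: {} for s in SWITCHES}   # per-switch <MAC, location> entries
cache = {s: {} for s in SWITCHES}   # per-switch reactively learned locations

def host_discovered(mac: str, attached_switch: str):
    """On host discovery, publish <MAC, location> at the resolver."""
    resolver = hash_to_switch(mac)
    store[resolver][mac] = attached_switch

def forward(ingress: str, mac: str) -> str:
    """Forward traffic for `mac` entering the network at `ingress`."""
    if mac in cache[ingress]:                  # already resolved: go direct
        return f"tunnel from {ingress} to {cache[ingress][mac]}"
    resolver = hash_to_switch(mac)             # first packet: via the resolver
    location = store[resolver][mac]
    cache[ingress][mac] = location             # resolver notifies; cache it
    return f"via resolver {resolver}, then directly {ingress} -> {location}"

host_discovered("MAC-x", "A")   # host x attaches at switch A
print(forward("D", "MAC-x"))    # traffic for x enters at switch D
```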

  8. Address Resolution <key, val> = <IP addr, MAC addr> [Figure: host y broadcasts an ARP request for IPx; its switch D turns this into a unicast look-up to resolver B, where Hash F(IPx) = B; B stores <IPx, MACx, A> and returns a unicast reply <IPx, MACx, A>.] Traffic following ARP takes a shortest path without separate location resolution
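
The same pattern handles ARP. This sketch, again with hypothetical names and a simplified hash placement, shows how the resolver's reply can carry both the MAC address and the host's attachment point, so no separate location resolution is needed afterwards.

```python
import hashlib

SWITCHES = ["A", "B", "C", "D", "E"]
arp_store = {s: {} for s in SWITCHES}      # per-switch <IP, (MAC, location)>

def hash_to_switch(key: str) -> str:
    """Simplified stand-in for the hash F over IP addresses."""
    return SWITCHES[int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SWITCHES)]

def publish_arp(ip: str, mac: str, location: str):
    """Store the <IP, MAC> binding (plus location) at the resolver for this IP."""
    arp_store[hash_to_switch(ip)][ip] = (mac, location)

def arp_lookup(ip: str):
    """Unicast look-up to the resolver instead of a network-wide broadcast."""
    return arp_store[hash_to_switch(ip)].get(ip)

publish_arp("10.0.0.7", "MAC-x", "A")      # host x registers at switch A
print(arp_lookup("10.0.0.7"))              # ('MAC-x', 'A'): the reply carries
                                           # the location, so data can follow
                                           # a shortest path right away
```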

  9. Handling Network and Host Dynamics • Network events • Switch failure/recovery • Change in <key, value> for DHT neighbor • Fortunately, switch failures are not common • Link failure/recovery • Link-state routing finds new shortest paths • Host events • Changes in host location, MAC address, or IP address • Must update stale host-information entries

  10. Handling Host Information Changes Dealing with host mobility [Figure: host x moves from its old location at switch D to a new location at switch A; the resolver B updates its entry from <x, D> to <x, A>, and switches still caching the stale <x, D> entry are updated to <x, A>.] MAC- or IP-address change can be handled similarly
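
One way to sketch the mobility case above in Python. The redirection pointer left at the old switch and the cache refresh on the next delivery are simplifying assumptions made for illustration, not the exact SEATTLE update protocol.

```python
resolver_store = {}                      # resolver's <MAC, location> entries
redirect = {}                            # pointers left behind at old switches
caches = {"C": {"MAC-x": "D"}}           # switch C still caches the stale <x, D>

def host_moved(mac, old_sw, new_sw):
    resolver_store[mac] = new_sw         # resolver's entry becomes <x, new_sw>
    redirect[(old_sw, mac)] = new_sw     # old switch redirects late traffic

def deliver(ingress, mac):
    next_hop = caches.get(ingress, {}).get(mac, resolver_store[mac])
    if (next_hop, mac) in redirect:      # stale entry: follow the redirect
        next_hop = redirect[(next_hop, mac)]
        caches.setdefault(ingress, {})[mac] = next_hop   # refresh stale cache
    return f"deliver traffic for {mac} via switch {next_hop}"

resolver_store["MAC-x"] = "D"            # x originally attached at switch D
host_moved("MAC-x", "D", "A")            # x moves to switch A
print(deliver("C", "MAC-x"))             # C's stale <x, D> entry becomes <x, A>
```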

  11. Packet-Level Simulations • Large-scale packet-level simulation • Event-driven simulation of control plane • Synthetic traffic based on LBNL traces • Campus, data center, and ISP topologies • Main results • Much less routing state than Ethernet • Only slightly more stretch than IP routing • Low overhead for handling host mobility

  12. Prototype Implementation [Figure: prototype architecture. In user space, the XORP OSPF daemon processes link-state advertisements and maintains the network map; through a Click interface it drives the kernel-level Click SeattleSwitch, which contains the routing table, a ring manager, and a host-info manager, handles host-info registration and notification messages, and forwards data frames.] Throughput: 800 Mbps for 512B packets, or 1400 Mbps for 896B packets

  13. Conclusions on SEATTLE • SEATTLE • Self-configuring, scalable, efficient • Enabling design decisions • One-hop DHT with link-state routing • Reactive location resolution and caching • Shortest-path forwarding • Relevance to Coronet • Backbone as one big virtual LAN • Using Ethernet addressing

  14. Router Grafting Joint work with Eric Keller, Kobus van der Merwe, and Michael Schapira http://www.cs.princeton.edu/~jrex/papers/nsdi10.pdf http://www.cs.princeton.edu/~jrex/papers/temigration.pdf

  15. Today: Change is Disruptive • Planned change • Maintenance on a link, card, or router • Re-homing customer to enable new features • Traffic engineering by changing the traffic matrix • Several minutes of disruption • Remove link and reconfigure old router • Connect link to the new router • Establish BGP session and exchange routes [Figure: customer edge router connected to a provider edge router]

  16. Router Grafting: Seamless Migration • IP: signal new path in underlying transport network • TCP: transfer TCP state, and keep IP address • BGP: copy BGP state, repeat decision process [Figure: the link is moved to the new router while session state is sent from the old router to the new one]
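
A high-level sketch of the three grafting steps, using placeholder classes; the real prototype (next slide) does this with modified Quagga, a graft daemon, and SockMi rather than these hypothetical methods.

```python
# Minimal sketch of the grafting sequence; class and method names are
# hypothetical placeholders, not the actual Quagga/SockMi interfaces.

class Router:
    def __init__(self, name):
        self.name = name
        self.sessions = {}                 # neighbor -> session state

    def export_session(self, neighbor):
        """Hand over the session's TCP state and learned routes."""
        return self.sessions.pop(neighbor)

    def import_session(self, neighbor, state):
        """Install TCP state and routes, then rerun the BGP decision process."""
        self.sessions[neighbor] = state
        print(f"{self.name}: grafted session to {neighbor}, "
              f"re-ran decision process over {len(state['routes'])} routes")

def graft(old_rtr, new_rtr, neighbor):
    # Step 1 (IP): the programmable transport network re-homes the physical
    # link to the new router (not modeled here).
    # Steps 2 (TCP) and 3 (BGP): move the session state; the neighbor keeps
    # talking to the same IP address and never sees a session reset.
    new_rtr.import_session(neighbor, old_rtr.export_session(neighbor))

old_edge, new_edge = Router("old-edge"), Router("new-edge")
old_edge.sessions["customer-1"] = {"tcp": "seq/ack numbers, buffers",
                                   "routes": ["10.0.0.0/8", "192.0.2.0/24"]}
graft(old_edge, new_edge, "customer-1")
```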

  17. Prototype Implementation • Added grafting into Quagga • Import/export routes, new ‘inactive’ state • Routing data and decision process well separated • Graft daemon to control process • SockMi for TCP migration [Figure: testbed with an unmodified router, an emulated link migration, and a graftable router running modified Quagga with a graft daemon (handler and comm modules) plus the SockMi.ko and click.ko kernel modules, on Linux kernel 2.6.19.7 (2.6.19.7-click for the Click node)]

  18. Grafting for Traffic Engineering Rather than tweaking the routing protocols… * Rehome customer to change traffic matrix

  19. Traffic Engineering Evaluation • Internet2 topology and traffic data • Developed algorithms to determine links to graft • Result: network can handle more traffic (at the same level of congestion)

  20. Conclusions • Grafting for seamless change • Make maintenance and upgrades seamless • Enable new management applications (e.g., TE) • Implementing grafting • Modest modifications to the router • Leveraging programmable transport networks • Relevance to Coronet • Flexible edge-router connectivity • Without disrupting neighboring ISPs

  21. Joint Failure Recovery and Traffic Engineering Joint work with Martin Suchara, Dahai Xu, Bob Doverspike, and David Johnson http://www.cs.princeton.edu/~jrex/papers/stamult10.pdf

  22. Simple Network Architecture • Precomputed multipath routing • Offline computation based on underlying topology • Multiple paths between each pair of routers • Path-level failure detection • Edge router only learns which path(s) have failed • E.g., using end-to-end probes, like BFD • No need for network-wide flooding • Local adaptation to path failures • Ingress router rebalances load over remaining paths • Based on pre-installed weights
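
As a concrete illustration of the local adaptation step, here is a small sketch in which the ingress renormalizes its pre-installed splitting ratios over the paths that probing reports as alive. Proportional renormalization is just one possible rebalancing rule; the state-dependent splitting described on slide 25 is the more general scheme.

```python
def rebalance(weights, failed):
    """Renormalize splitting ratios over the paths that are still up."""
    alive = {p: w for p, w in weights.items() if p not in failed}
    total = sum(alive.values())
    return {p: w / total for p, w in alive.items()}

paths = {"p1": 0.5, "p2": 0.25, "p3": 0.25}   # precomputed paths and ratios
print(rebalance(paths, failed={"p2"}))         # -> {'p1': 0.666..., 'p3': 0.333...}
```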

  23. Architecture • topology design • list of shared risks • traffic demands • fixed paths • splitting ratios [Figure: three paths from s to t with splitting ratios 0.5, 0.25, and 0.25]

  24. Architecture • fixed paths • splitting ratios [Figure: after a link cut is detected by path probing, the failed path's ratio drops to 0 and traffic from s to t is rebalanced 0.5/0.5 over the two remaining paths]

  25. State-Dependent Splitting • Custom splitting ratios • Weights for each combination of path failures • Configuration: at most 2^#paths entries [Figure: example splitting ratios over paths p1, p2, and p3 for different combinations of failed paths]
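
A sketch of how an ingress router might store and use the per-failure-state weights. The ratios below are made up for illustration and the table layout is an assumption, but it shows why at most 2^#paths entries are needed: one per combination of failed paths.

```python
PATHS = ["p1", "p2", "p3"]

# Precomputed offline: one entry per combination of failed paths that still
# leaves at least one path up -- at most 2^len(PATHS) entries in total.
SPLITS = {
    frozenset():             {"p1": 0.4, "p2": 0.4, "p3": 0.2},
    frozenset({"p1"}):       {"p2": 0.7, "p3": 0.3},
    frozenset({"p2"}):       {"p1": 0.6, "p3": 0.4},
    frozenset({"p3"}):       {"p1": 0.6, "p2": 0.4},
    frozenset({"p1", "p2"}): {"p3": 1.0},
    frozenset({"p1", "p3"}): {"p2": 1.0},
    frozenset({"p2", "p3"}): {"p1": 1.0},
}

def split_for(failed_paths):
    """Look up the pre-installed ratios for the observed failure state."""
    return SPLITS[frozenset(failed_paths)]

print(split_for(set()))      # all paths up: {'p1': 0.4, 'p2': 0.4, 'p3': 0.2}
print(split_for({"p2"}))     # p2 reported down: {'p1': 0.6, 'p3': 0.4}
```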

  26. Optimizing Paths and Weights • Optimization algorithms • Computing multiple paths per pair of routers • Computing splitting ratios for each failure scenario • Performance evaluation • On AT&T topology, traffic, and shared-risk data • Performance competitive with optimal solution • Using around 4-8 paths per pair of routers • Benefits • Joint failure recovery and traffic engineering • Very simple network elements (nearly zero code) • Part of gradual move away from dynamic layer 3
