Bringing Experimenters to GENI with the Transit Portal Vytautas Valancius, Hyojoon Kim, Nick Feamster, Georgia Tech (with Jennifer Rexford and Aki Nakao)
Talk Agenda • Motivation: Custom routing for each experiment • Demonstration • How you can connect to Transit Portal • Experiment Ideas • Anycast • Service Migration • Flexible Peering • Using Transit Portal in Education • Example problem set • Summary and Breakout Ideas
Networks Use BGP to Interconnect Autonomous Systems (figure: autonomous systems exchanging route advertisements and traffic over BGP sessions)
Virtual Networks Need BGP Too • Strawman • Default routes • Public IP address • Problems • Experiments may need to see all upstream routes • Experiments may need more control over traffic • Need “BGP” • Setting up individual sessions is cumbersome • …particularly for transient experiments ISP 2 ISP 1 BGP Sessions GENI
Route Control Without Transit Portal • Obtain connectivity to upstream ISPs • Physical connectivity • Contracts and routing sessions • Obtain Internet number resources (IP prefixes, AS numbers) from authorities • Expensive and time-consuming!
Route Control with Transit Portal • Full Internet route control to hosted cloud services! (figure: the Transit Portal connects ISP1 and ISP2 on the Internet side to Virtual Routers A and B, which serve Experiments 1 and 2 in the experiment facility; arrows show routes and packets)
Connecting to the Transit Portal • Separate Internet router for each service • Virtual or physical routers • Links between service router and TP • Each link emulates connection to upstream ISP • Routing sessions to upstream ISPs • TP exposes standard BGP route control interface
Basic Internet Routing with TP • Experiment with two upstream ISPs • Experiment can re-route traffic over one ISP or the other, independently of other experiments ISP 2 ISP 1 BGP Sessions Traffic Transit Portal Virtual BGP Router Interactive Cloud Service
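To make this per-experiment control concrete, here is a hedged sketch of how the experiment's virtual BGP router (running Quagga, the daemon suggested later in the talk) could prefer routes learned over the session that emulates ISP 1. The neighbor addresses are hypothetical placeholders; the AS numbers match those used later in the demonstration setup (TP public AS 47065, client private AS 65002).

! Client bgpd.conf fragment (sketch; addresses are placeholders)
! One session per emulated upstream ISP, both terminated on the Transit Portal
router bgp 65002
 neighbor 10.0.1.1 remote-as 47065
 neighbor 10.0.2.1 remote-as 47065
 neighbor 10.0.1.1 route-map PREFER-ISP1 in
!
! Raise local-preference above the default of 100 so routes via ISP 1 win
route-map PREFER-ISP1 permit 10
 set local-preference 200

Shifting traffic to ISP 2 is then just a matter of moving the route-map to the other session (or lowering the preference), without touching any other experiment's sessions.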
Current TP Deployment • Server with custom routing software • 4GB RAM, 2x2.66GHz Xeon cores • Three active sites with upstream ISPs • Atlanta, Madison, and Princeton • A number of active experiments • BGP poisoning (University of Washington) • IP Anycast (Princeton University) • Advanced Networking class (Georgia Tech)
Demonstration Setup (figure): a client virtual router (private AS 65002) announces the client network 168.62.21.0/24 through the Transit Portal (public AS 47065) over a VPN tunnel; the Transit Portal connects upstream to GT (AS 2637); a looking-glass server (route-server.ip.att.net) and traceroute are used to verify BGP connectivity.
Setting Up Peering with TP • Pick a device to act as the virtual router (Linux) • Request the needed resources & provide information • For tunneling: CA certificate, client certificate & key • Get the prefixes that the client will announce • Establish the tunnel connection to the Transit Portal • Set up a BGP daemon on the virtual router (e.g., Quagga; see the sketch below) • Adjust the local routing table if necessary • Check BGP announcements & connectivity (BGP table) … and you are good to go!
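A minimal configuration sketch for the Quagga step above, assuming OpenVPN provides the tunnel (as in the demonstration). The tunnel addresses 10.1.0.1/10.1.0.2 are hypothetical placeholders; the announced prefix and the AS numbers are taken from the demonstration slide and should be replaced with the values assigned to your experiment.

! Client bgpd.conf (sketch; addresses are placeholders)
! 10.1.0.2 = client end of the tunnel, 10.1.0.1 = Transit Portal end
hostname client-bgpd
router bgp 65002
 bgp router-id 10.1.0.2
 network 168.62.21.0/24
 neighbor 10.1.0.1 remote-as 47065
 neighbor 10.1.0.1 description transit-portal

Once the session comes up, running "show ip bgp" in Quagga's vtysh should list the routes learned from the Transit Portal, and the announced prefix should become visible in public looking glasses such as route-server.ip.att.net.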
Experiment 1: IP Anycast • Internet services require fast name resolution • IP anycast for name resolution • DNS servers with the same IP address • IP address announced to ISPs in multiple locations • Internet routing converges to the closest server • Traditionally available only to large organizations
IP Anycast • Host service at multiple locations (e.g., on ProtoGENI) • Direct traffic to one instance of the service or another using anycast Asia North America ISP1 ISP2 ISP3 ISP4 Transit Portal Transit Portal Anycast Routes Name Service Name Service
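A hedged sketch of the anycast setup: each name-service site runs its own BGP daemon (Quagga here) and originates the same prefix toward its local Transit Portal. The prefix 203.0.113.0/24, the tunnel addresses, and the client AS numbers are hypothetical placeholders.

! Site A (North America) -- bgpd.conf sketch
router bgp 65010
 network 203.0.113.0/24
 neighbor 10.10.0.1 remote-as 47065

! Site B (Asia) -- bgpd.conf sketch
router bgp 65011
 network 203.0.113.0/24
 neighbor 10.20.0.1 remote-as 47065

Because both sites originate the same prefix, ordinary BGP path selection delivers each client to the nearer instance; withdrawing the announcement at one site shifts its traffic to the other.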
Experiment 2: Service Migration • Internet services in geographically diverse data centers • Operators migrate Internet users' connections • Two conventional methods: • DNS name re-mapping • Slow • Virtual machine migration with local re-routing • Requires a globally routed network
Service Migration Asia Internet North America ISP1 ISP2 ISP3 ISP4 Transit Portal Transit Portal Tunneled Sessions Active Game Service
Experiment 3: Flexible Peering • A hosted service can quickly provision additional instances in the cloud when demand fluctuates.
Using TP in Your Courses • Used in the "Next-Generation Internet" course at Georgia Tech in Spring 2010 • Students set up virtual networks and connect directly to TP via OpenVPN (similar to the demonstration) • Live feed of BGP routes • Routable IP addresses for in-class topology inference and performance measurements
Example Problem Set • Set up virtual network with • Intradomain routing • Hosted services • Rate limiting • Connect to Internet with Transit Portal
Ongoing Developments • More deployment sites • Your help is desperately needed • Integrating TP with network research testbeds (e.g., GENI, CoreLab) • Faster forwarding (NetFPGA, OpenFlow) • Lightweight interface to route control
Conclusion • Hosted services today have only limited routing control • Transit Portal gives wide-area route control • Advanced applications with many TPs • Open-source implementation • Scales to hundreds of client sessions • The deployment is real • Can be used today for research and education • More information: http://valas.gtnoise.net/tp
Breakout Session Agenda • Q & A • Demonstration Redux • Brainstorming Experiments • MeasuRouting: Routing-Assisted Traffic Monitoring • Pathlet Routing and Adaptive Multipath Algorithms • Aster*x: Load-Balancing Web Traffic over Wide-Area Networks • Migrating Enterprises to Cloud-based Architectures
Scaling the Transit Portal • Scale to dozens of sessions to ISPs and hundreds of sessions to hosted services • At the same time: • Present each client with sessions that appear to be direct connectivity to an ISP • Prevent clients from abusing Internet routing protocols
Conventional BGP Routing • Conventional BGP router: • Receives routing updates from peers • Propagates updates about one path only (its best path) • Selects one path to forward packets • Scalable, but not transparent or flexible ISP2 ISP1 BGP Router Client BGP Router Client BGP Router Updates Packets
Scaling TP Memory Use • Store and propagate all BGP routes from ISPs • Separate routing tables • Reduce memory consumption • Single routing process - shared data structures • Reduce memory use from 90MB/ISP to 60MB/ISP ISP1 ISP2 Routing Process Routing Table 1 Routing Table 2 Virtual Router Virtual Router Bulk Transfer Interactive Service
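One way to keep per-ISP routes in separate tables while sharing a single routing process is Quagga's BGP views; whether the Transit Portal's modified software uses exactly this mechanism is an assumption here, and the neighbor addresses and AS numbers below are placeholders.

! Sketch: one bgpd process, two views, one per upstream ISP
bgp multiple-instance
!
router bgp 47065 view ISP1
 neighbor 192.0.2.1 remote-as 64600
!
router bgp 47065 view ISP2
 neighbor 198.51.100.1 remote-as 64601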
Scaling TP CPU Use • Hundreds of routing sessions to clients • High CPU load • Schedule and send routing updates in bundles • Reduces CPU from 18% to 6% for 500 client sessions ISP1 ISP2 Routing Process Routing Table 1 Routing Table 2 Virtual Router Virtual Router Bulk Transfer Interactive Service
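For comparison, stock Quagga can already pace outbound updates per neighbor with advertisement-interval; the TP's bundling of client updates is in the same spirit, although its exact scheduling mechanism is internal to the modified routing software. The address and AS numbers below are placeholders.

! Sketch: batch outbound updates to a client session into 30-second rounds
router bgp 47065
 neighbor 10.1.0.2 remote-as 65002
 neighbor 10.1.0.2 advertisement-interval 30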
Scaling Forwarding Memory • Connecting clients • Tunneling and VLANs • Curbing memory usage • Separate virtual routing tables with a default route to the upstream • 50MB/ISP -> ~0.1MB/ISP memory use in forwarding table ISP1 ISP2 Forwarding Table Forwarding Table 1 Forwarding Table 2 Virtual BGP Router Virtual BGP Router Bulk Transfer Interactive Service