Prototype Deployment of Internet Scale Overlay Hosting
Status Report – 11/2009
Patrick Crowley and Jon Turner
and John DeHart, Mart Haitjema, Fred Kuhns, Jyoti Parwatikar, Ritun Patney, Charlie Wiseman, Mike Wilson, Ken Wong, Dave Zar
Computer Science & Engineering, Washington University
www.arl.wustl.edu
Project Objectives
• Deploy five experimental overlay hosting platforms
  • located at Internet 2 PoPs
  • compatible with PlanetLab, moving to the GENI control framework
  • performance characteristics suitable for service deployment
  • integrated system architecture with multiple server blades
  • shared NP-based server blades for fast-path packet processing
• Demonstrate multiple applications
[Figure: node architecture – Control Processor (CP), General Purpose Processing Engines (GPEs), Network Processing Engine (NPE), netFPGA, Line Cards, and Chassis Switch, with 10x1 GbE links to an external switch]
Target Internet 2 Deployment
[Figure: map of target Internet 2 deployment sites]
Status
• First three nodes deployed and operational
  • installation went smoothly (mostly)
  • two hardware issues
    • replaced KC line card; still need to replace one GPE at SALT
• Demonstration this afternoon
• Ready for a few friendly users
  • still need to complete flow stats
  • version 1 of NPE software
    • not yet ready to add fastpath code options
    • but happy to start working with interested groups
• Tutorial planned for next GEC
  • let me know if you’re interested
Current Deployment
[Figure: SPP nodes at WASH, SALT, and KANS, each attached to an I2 router and connected to ProtoGENI over I2 optical connections; external addresses 64.57.23.194, 64.57.23.178, and 64.57.23.210 (plus addresses ending .198/.204, .182/.186, and .214/.218); the SPPLC runs at drn06.arl.wustl.edu]
Hosting Platform Details
[Figure: GPEs run slice VMs over PLOS; the NPE fast path performs header parsing, lookup, filtering, queueing, and header formatting; Line Cards, Chassis Switch, netFPGA, and 10x1 GbE links to an external switch connect the components; the CP manages the node]
Demonstrations this Afternoon
• IPv4 network using NP-based fastpaths
  • traffic generated from PlanetLab nodes
  • demonstrates dynamic changes to installed filters
    • script in GPE slice issues control messages to modify filters
  • handling of exception cases by GPE slice
  • use of traffic monitoring
• Forest overlay network – GPE-only implementation
  • multiple dynamic multicast groups over per-session, tree-structured communication channels
  • traffic generated from PlanetLab nodes
  • traffic monitoring using statistics data generated by GPE slice
IPv4 Demo
[Figure: three SPP routers (KANS-R1, SALT-R2, WASH-R3), each running an IPv4 fastpath (FP) on its NPE and control code (CTL) in a GPE slice; PlanetLab traffic generators feed the routers, and the experiment is controlled through the SPPLC]
Forest Overlay Network
• Focus on real-time distributed applications
  • large online virtual worlds
  • distributed cyber-physical systems
• Large distributed sessions
  • endpoints issue periodic status reports and subscribe to dynamically changing sets of reports
  • requires real-time, non-stop data delivery, even as the communication pattern changes
• Per-session communication channels (comtrees)
  • unicast data delivery with route learning
  • dynamic multicast subscriptions using a multicast core
[Figure: clients and servers reach overlay routers over access connections; a comtree spans the overlay routers]
Forest Demo
[Figure: Forest routers (fRtr) in GPE slices on the three SPPs (KANS-R1, SALT-R2, WASH-R3); two comtrees span routers R1–R3; multicast senders and subscribers run on PlanetLab nodes, and the experiment is controlled through the SPPLC]
Developing a new SPP Application
• Create reference model as foundation
  • work out functional issues
  • develop and debug control code
  • build with separate control and datapath processes
  • implement in Open Network Lab or Emulab
• Port reference model to SPPs using GPEs only
  • initially, just reproduce reference model functionality
  • extend to use multiple external interfaces and reserved capacity
• Design new NPE code option
  • we’ll work with you and do the NPE code development
  • need a working GPE version and a fastpath design spec
Working with SPPs
• Define new slice using SPPLC
  • much as with PlanetLab
• Log in to GPE slice to install application code
• Reserve resources needed for your experiment
  • includes interface bandwidth on external ports and NPE fastpath resources
• To run experiment during reserved period
  • configure reserved resources as needed
  • set up logical endpoints (aka meta-interfaces)
  • configure fastpath (filters, queues)
  • set up real-time monitoring
  • run application and start traffic generators
Web Site and SPPLC
• Web site: http://wiki.arl.wustl.edu/index.php/Internet_Scale_Overlay_Hosting
• SPPLC: drn06.arl.wustl.edu
Steps in Setting Up and Running an Experiment
• Reserve resources in advance
• At start of session
  • configure logical interfaces (aka meta-interfaces)
    • associate with physical interfaces and port numbers
    • assign bandwidth
  • set up fastpath
    • assign queues to logical interfaces, set queue parameters
    • install fastpath filters
  • set up traffic monitoring
• At end of session
  • release resources in use
  • cancel any remaining reservation time
(a command-level sketch of this sequence is shown below)
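To make the sequence concrete, here is a rough, hypothetical session sketch built from the command line tools and examples on the following slides. The addresses, ports, queue IDs, and filter arguments are illustrative placeholders only, and the cancel_resrv arguments are elided since they depend on your reservation.

  # 1. Reserve resources in advance (resFile is your reservation spec)
  resrv --cmd make_resrv --xfile resFile

  # 2. At the start of the session, configure logical interfaces (meta-interfaces)
  client --cmd alloc_endpoint --bw 1000 --ipaddr 64.57.23.210 --proto 6 --port 3551
  client --cmd alloc_udp_tunnel --fpid 0 --bw 16000 --ipaddr 64.57.23.210 --port 20001

  # 3. Set up the fastpath: bind queues, set queue parameters, install filters
  client --cmd bind_queue --fpid 0 --miid 1 --qid_list_type 0 --qid_list 4
  client --cmd set_queue_params --fpid 0 --qid 3 --threshold 1000 --bw 8000
  fltr --cmd write_fltr --fpid 0 --fid 10 --key_type 0 --key_rxmi 2 ... --qid 5

  # 4. Set up traffic monitoring (sliced in your vServer, SPPmon remotely)
  /sliced --ip 64.57.23.194 --port 3552 &

  # 5. At the end of the session, release resources and cancel remaining reservation time
  resrv --cmd cancel_resrv ...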
Command Line Tools
• resrv – reserve resources in advance
  • make_resrv, update_resrv, cancel_resrv, get_resrv, ...
• client – perform a variety of operations
  • get_ifaces, get_ifattrs
  • get_queue_params, set_queue_params, bind_queue
  • alloc_endpoint, alloc_tunnel
  • write_fltr, update_result, get_fltr_bykey
  • read_stats, create_periodic, get_queue_len
• create-fp – instantiate a fastpath
• fltr – manipulate fastpath filters for the IPv4 code option
• sliced – monitor traffic and display remotely
Reserving Resources in Advance
• resrv --cmd make_resrv --xfile resFile

  <spp>
    <rsvRecord>
      <rDate start="20091117140000" end="20091117160000" />        <!-- reserved time period -->
      <fpRSpec name="jstFastpath">
        <fpParams>
          <copt>1</copt>
          <bwspec firm="1000000" soft="0" />                        <!-- aggregate fastpath bandwidth -->
          <rcnts fltrs="100" queues="10" buffs="1000" stats="20" /> <!-- # of TCAM filters, queues, packet buffers, traffic counters -->
          ...
        </fpParams>
        <ifParams>
          <ifRec bw="20000" ip="64.57.23.194" />                    <!-- bandwidth on external interfaces, specified by IP address -->
          ...
        </ifParams>
      </fpRSpec>
      <plRSpec>
        <ifParams>
          <ifRec bw="10000" ip="64.57.23.194" />                    <!-- external interface bw for GPE slice -->
        </ifParams>
      </plRSpec>
    </rsvRecord>
  </spp>
Configuring Logical Interfaces
• For use by GPE slice (sets up an externally visible port)
  client --cmd alloc_endpoint --bw 1000 --ipaddr 64.57.23.210 --proto 6 --port 3551
• For use by fastpath
  client --cmd alloc_udp_tunnel --fpid 0 --bw 16000 --ipaddr 64.57.23.210 --port 20001
• Arguments: --bw is in Kb/s; --ipaddr is the address of the interface; --proto is 6 for TCP, 17 for UDP; --port is the port number; --fpid identifies the fastpath within this slice
Configuring Fastpath
• Binding queues to logical interfaces (--miid is the meta-interface id, --qid_list the assigned queues)
  client --cmd bind_queue --fpid 0 --miid 1 --qid_list_type 0 --qid_list 4 --qid_list 6
• Setting queue parameters
  client --cmd set_queue_params --fpid 0 --qid 3 --threshold 1000 --bw 8000
• Setting up filters (for IPv4 fastpaths)
  fltr --cmd write_fltr --fpid 0 --fid 10 --key_type 0 --key_rxmi 2 --key_daddr xx --mask_daddr xx --key_dport xx --txdaddr xx --txdport xx --qid 5
Displaying Real-Time Data
• Fastpath maintains traffic counters and queue lengths
• To display traffic data
  • configure an external TCP port
  • run sliced within your vServer, using the configured port
    /sliced --ip 64.57.23.194 --port 3552 &
  • on a remote machine, run SPPmon.jar (see the sketch below)
  • use the provided GUI to set up monitoring displays
• SPPmon configures sliced, which polls the NPE running the fastpath
  • SCD-N reads and returns counter values to sliced
• Can also display data written to a file within your vServer
[Figure: SPPmon on a remote machine connects to sliced in a GPE slice; sliced polls the SCD-N on the NPE for counter values]
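A minimal sketch of the monitoring setup, assuming SPPmon.jar is launched with a plain java -jar invocation (the exact launch command and any SPPmon arguments are an assumption; the sliced line is taken from the slide above):

  # in your vServer on the GPE, using the external TCP port you configured
  /sliced --ip 64.57.23.194 --port 3552 &

  # on your remote machine; point the GUI at the sliced address/port above
  # (assumed invocation; see the project wiki for the exact SPPmon launch instructions)
  java -jar SPPmon.jar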
Summary
• SPPs ready for use
  • start with a few friendly users, but will open up soon
  • tutorial at next GEC
• To-do list
  • complete flow stats subsystem
  • version 2 of NPE software
    • higher performance and multicast support
  • resource allocation mechanisms
    • enable coordinated reservation across all nodes
  • work with user groups on new code options