
MobilityFirst Project Update, NSF Meeting, March 11, 2013



Presentation Transcript


  1. MobilityFirst Project Update, NSF Meeting, March 11, 2013. D. Raychaudhuri, WINLAB, Rutgers University, ray@winlab.rutgers.edu

  2. Introduction: Progress Highlights
     • MobilityFirst project now moving from the design phase to experimental evaluation and GENI/EC2 deployment
     • Highlights of recently completed work:
       • Architecture is now stable – clarification of the named-object/GUID narrow waist and its application to specific use cases; comparisons with other FIA schemes
       • Two alternative GNRS designs completed and evaluated, with prototype deployment starting on EC2 and GENI
       • Intra-domain routing complete (GSTAR); two alternative inter-domain routing designs completed; evaluation and prototyping ongoing
       • Evaluation of routing technology options: software, OpenFlow, NetFPGA
       • Security and privacy analysis for key MF protocols – GNRS, routing, etc.
       • Compute layer for plug-in extensions/in-network processing designed and implemented
       • Detailed designs and prototyping for mobile, content, and M2M use cases
       • Network management client-side design and demo; ongoing work on overall NMS capabilities
       • System-level prototyping of the MF protocol stack and GENI deployment with real-world end-users and applications

  3. Introduction: Meeting Agenda (draft agenda)

  4. Architecture Update – Arun Venkataramani

  5. From Design Goals to Current Architecture
     • Design goals: host + network mobility, no global root of trust, intentional data receipt, proportional robustness, content-awareness, evolvability
     • Architecture components: global name service (name certification, name resolution, content storage & retrieval, context & M2M services, service migration), computing layer, segmented transport, inter-/intra-domain routing, management plane
     • Key insight: a logically centralized global name service enhances mobility, security, and network-layer functions

  6. Architecture: Global name service
     • Name certification: human_readable_name → GUID (e.g., "Darleen Fisher's phone" → 1A348F76)
     • Name resolution: Auspice, DMap
     • Self-certifying GUID = hash(public key) permits bilateral authentication
     • GUID flexibly identifies principals: interface, device, person, group, service, network, etc.
     • Other name service functions: content storage & retrieval, context & M2M services, service migration
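A minimal sketch of the self-certifying GUID idea on this slide: the GUID is the hash of a principal's public key, so the key holder can prove ownership of the GUID and peers can verify the binding without a global root of trust. The digest length, hash choice, and helper names are illustrative assumptions, not the MobilityFirst implementation.

```python
# Illustrative sketch (not the MobilityFirst implementation): a GUID derived as the
# hash of a principal's public key, enabling bilateral authentication.
import hashlib

GUID_BITS = 160  # assumed length; MF GUIDs are flat, fixed-length identifiers

def derive_guid(public_key_bytes: bytes) -> str:
    """GUID = hash(public key), truncated and hex-encoded."""
    digest = hashlib.sha256(public_key_bytes).digest()
    return digest[:GUID_BITS // 8].hex()

def verify_guid(claimed_guid: str, public_key_bytes: bytes) -> bool:
    """Anyone can check that the presented public key really owns the GUID."""
    return derive_guid(public_key_bytes) == claimed_guid

# Example: a device announces (GUID, public key); peers verify the binding locally.
pk = b"-----BEGIN PUBLIC KEY----- ...example key bytes... -----END PUBLIC KEY-----"
guid = derive_guid(pk)
assert verify_guid(guid, pk)
print("GUID:", guid)
```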

  7. Architecture: Global name service
     • Name resolution (Auspice, DMap): resolve(GUID) → network addresses (GUID → NA1, NA2); data is then delivered to the returned NAs
     • (Diagram: a sender resolves a GUID through the global name service, obtains the GUID → NA1, NA2 mappings, and sends data toward networks NA1 and NA2.)

  8. Global name service: Content retrieval (architecture diagram repeated, focusing on the content storage & retrieval function of the global name service)

  9. Global name service: Content retrieval
     • Content CGUID → [NA1, NA2, …] via the GNRS
     • A requester issues get(CGUID); the request is bound to a network holding a copy, e.g., get(CGUID, NA1)
     • Opportunistic caching + request interception at routers holding copies of CGUID
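A rough sketch of the retrieval flow on this slide: resolve the content GUID (CGUID) through the GNRS to the networks holding copies, fetch from the nearest one, and let an on-path cache answer repeat requests. The GNRS table, distance metric, and cache are toy stand-ins for the real protocol machinery.

```python
# Illustrative sketch of GNRS-mediated content retrieval with opportunistic caching.
# The GNRS table, cost metric, and cache are toy stand-ins, not the real protocol.

gnrs = {"CGUID_42": ["NA1", "NA2"]}       # content GUID -> network addresses of copies
router_cache = {}                          # opportunistic cache at an intercepting router
hop_cost = {"NA1": 3, "NA2": 7}            # assumed distance metric to each copy

def get(cguid: str) -> str:
    # Request interception: a copy cached on-path is served immediately.
    if cguid in router_cache:
        return router_cache[cguid]
    # Otherwise resolve CGUID -> [NA1, NA2, ...] and fetch from the closest replica.
    replicas = gnrs.get(cguid, [])
    if not replicas:
        raise KeyError(f"no replica registered for {cguid}")
    nearest = min(replicas, key=lambda na: hop_cost[na])
    data = f"<content {cguid} fetched from {nearest}>"
    router_cache[cguid] = data             # cache for later requests
    return data

print(get("CGUID_42"))   # fetched from NA1 (closer), then cached
print(get("CGUID_42"))   # served from the on-path cache
```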

  10. Global name service: Content retrieval (architecture diagram repeated, focusing on the content storage & retrieval function)

  11. Indirection and grouping
     • Indirection and grouping enable context-aware services, content mobility, and group mobility
     • Indirection: D1 → D2
     • Grouping: D → {D1, D2, …, Dk}

  12. Indirection + grouping: Context-awareness
     • At source: CAID → {T1, T2, …, Tk} // terminal networks
     • At terminal n/w: CAID → {members(CAID) | Ti} // late binding
     • Example GNRS entry: GUID_cab_i → [T1, {"type" → "yellow cab", "geo" → "Times Sq."}]
     • The sender issues send_data(CAID, Ti) toward each terminal network Ti; each Ti resolves members(CAID) locally before delivery
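A minimal sketch of the two-stage (late-binding) resolution shown here: the sender binds the context GUID (CAID) only to terminal networks, and each terminal network binds the CAID to its current members just before delivery. All table contents are invented for illustration.

```python
# Illustrative sketch of late binding for a context group (CAID).
# Source resolves CAID -> terminal networks; each terminal network resolves
# CAID -> its current members at delivery time. All tables are toy data.

gnrs_caid_to_terminals = {"CAID_taxis_timessq": ["T1", "T2"]}
terminal_members = {                      # membership known only inside each terminal n/w
    "T1": ["GUID_cab_17", "GUID_cab_23"],
    "T2": ["GUID_cab_40"],
}

def send_data(caid: str, payload: str) -> None:
    for t in gnrs_caid_to_terminals[caid]:          # stage 1: bind to terminal networks
        for member in terminal_members[t]:          # stage 2: late-bind to current members
            print(f"deliver '{payload}' to {member} via {t}")

send_data("CAID_taxis_timessq", "fare request near Times Sq.")
```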

  13. From Design Goals to Current Architecture (repeat of the slide 5 design-goals-to-architecture diagram, shown as a transition)

  14. Architecture: Scaling interdomain routing
     • Function: route to GUID@NA, i.e., send(GUID@NA, data)
     • Scale: millions of NAs → huge forwarding tables

  15. Architecture: Scaling interdomain routing
     • Function: route to GUID@NA scalably
     • Approach: split core and edge networks to reduce state; the global name service maps GUID → [X2, T4] (core network X2, terminal network T4), and packets carry GUID, X2, T4, and data
     • A few interdomain routing design efforts are maturing:
       • Vnode + pathlet routing + link state + telescoping updates
       • Bloom routing
       • Core-edge routing with *-cast through the name service

  16. From Design Goals to Current Architecture (repeat of the slide 5 diagram)

  17. Architecture: Computing layer
     • A programmable computing layer enables service flexibility and evolvability
     • Routers support new network services off the critical path (e.g., virtual service provider, content caching, privacy/anonymous routing), drawing on router CPU and storage alongside packet forwarding/routing
     • Packets carry (optional) service tags for demuxing
     • Integration with "active" GUID resolution in the global name service

  18. From Design Goals to Current Architecture (repeat of the slide 5 diagram)

  19. From Design Goals to Current Architecture (repeat of the slide 5 diagram)

  20. Architecture: Why logically centralized? (Diagram contrasting indirection-based, logically centralized, and network-layer approaches.)

  21. Auspice GNRS – Arun Venkataramani, Emmanuel Cecchet

  22. Global name service as a geo-distributed key-value store
     • resolve(GUID, …) → value(s), answered by geo-distributed replicas of the name service
     • Example GUID record: { NAs: [[X1,T1],[X2,T2],…], geoloc: [lat, long], TE_prefs: ["prefer WiFi", …], ACL: {whitelist: […]}, … }
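The GUID record on this slide reads naturally as a key-value entry; below is a hedged sketch of what resolving such a record might look like, with the geo-distributed store reduced to a local dictionary and field names taken from the slide.

```python
# Illustrative sketch: the global name service as a key-value store whose values
# are structured GUID records (field names from the slide; storage is a plain dict).

name_store = {
    "1A348F76": {
        "NAs": [["X1", "T1"], ["X2", "T2"]],           # current network attachment points
        "geoloc": [40.4862, -74.4518],                  # assumed example lat/long
        "TE_prefs": ["prefer WiFi"],
        "ACL": {"whitelist": ["GUID_dfisher_laptop"]},
    }
}

def resolve(guid: str, field: str = None):
    """Return the whole record, or just one field of it (e.g. only the NAs)."""
    record = name_store[guid]
    return record if field is None else record[field]

print(resolve("1A348F76", "NAs"))        # -> [['X1', 'T1'], ['X2', 'T2']]
print(resolve("1A348F76", "TE_prefs"))   # -> ['prefer WiFi']
```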

  23. Auspice design goals
     • Low response time: replicas of each name's resolver should be placed close to querying end-users
     • Low update cost: the number of resolver replicas should be limited to reduce replica consistency overhead
     • Load balance: placement of replicas across all names should prevent load hotspots at any single site
     • Availability: a sufficient number of replicas to ensure availability amidst crash or malicious faults
     • Consistency: each name resolver's consistency requirements must be preserved

  24. Trade-offs of traditional approaches
     • Replicate everything everywhere:
       + Low response times
       - High update cost under mobility; load imbalance
     • A few primary replicas plus edge caching:
       + Low update bandwidth cost
       - Consistency requirements may limit caching benefits
       - Load balance vs. response time trade-offs
     • Consistent hashing with replication:
       + Good load balance
       - High response times (randomization and locality are at odds)
       - Dynamic replication, consistency coordination, and load balance are hard to combine

  25. Auspice resolver replica placement (diagram: placement is both locality-aware and load-aware)

  26. Auspice resolver placement engine
     • Replica controllers run a mapping algorithm + Paxos to compute active replica locations and migrate replicas based on load reports
     • Active replicas are locality-aware, load-aware, and consistent
     • End-hosts or local name servers issue requests; after the first request for a name X, typical requests for X are served by a nearby active replica
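To make the controller's job concrete, here is a deliberately simple placement rule: give a name extra replicas when its total demand is high (load-aware) and put them at the sites generating the most requests (locality-aware). This greedy heuristic and its parameters are illustrative assumptions, not Auspice's actual mapping algorithm.

```python
# Illustrative placement sketch (NOT Auspice's algorithm): choose active replica
# sites for one name from per-site demand reports; hotter names get more replicas.

def place_replicas(demand_by_site: dict, base_replicas: int = 3,
                   requests_per_extra_replica: int = 1000) -> list:
    total = sum(demand_by_site.values())
    k = base_replicas + total // requests_per_extra_replica   # load-aware replica count
    ranked = sorted(demand_by_site, key=demand_by_site.get, reverse=True)
    return ranked[:k]                                          # locality-aware site choice

demand = {"us-east": 700, "eu-west": 300, "ap-south": 90, "us-west": 60, "sa-east": 10}
print(place_replicas(demand))   # -> ['us-east', 'eu-west', 'ap-south', 'us-west']
```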

  27. Auspice service migration (in progress)
     • Replica-management operations: create_replica(.), shutdown_replica(.), migrate_replica(.), report_load(.)
     • Consistency options: sequential consistency, linearizability
     • Paxos groups coordinate replicas across regions (America, Europe, Asia)

  28. Auspice implementation & evaluation
     • Implemented mostly in Java (~22K lines of code)
     • Supports MySQL, MongoDB, Cassandra, and an in-memory store
     • HTTP API for requests/responses
     • Flexible keys and values: [GUID, NA], [GUID, IP], [name, IP]
     • Near-beta version deployed at eight geo-distributed Amazon EC2 locations
     • Extensive evaluation on larger clusters and in PlanetLab settings
     • Mobile socket library for seamless mid-session client and server migration

  29. Auspice vs. alternate proposals

  30. Auspice vs. commercial managed DNS

  31. Application scenario: Emergency geo-cast (demo by Emmanuel Cecchet)

  32. Global Name Resolution Services Through Direct Mapping (DMAP) – Yanyong Zhang

  33. Name-Address Separation Through GNRS
     • Globally unique flat identifier (GUID) for network-attached objects: device, content, context, AS name, sensor, and so on (e.g., Sue's_mobile_2, Server_1234, Media File_ABC, Taxis in NB, Sensor@XYZ, John's_laptop_1)
     • Multiple domain-specific naming services (host, sensor, context, content) assign GUIDs
     • The Global Name Resolution Service stores GUID → network address mappings (e.g., Net1.local_ID, Net2.local_ID)
     • Where to store these mappings?

  34. Direct Mapping (DMAP)
     • GUID (00101100……10011001) → hash function → IP address, e.g., IPx = 44.32.1.153
     + Strictly 1-overlay-hop lookup
     + No extra routing requirement (e.g., can utilize current BGP)
     - IP "hole" issues
     - Limited locality

  35. Fixing IP Holes for IPv4
     • (Figure: map of the IPv4 /8 address space, white = unassigned addresses; plotted value at m = 10 is 0.0009)
     • Fixing IP holes: if the hash of a GUID falls in an IP hole, rehash that value up to m times to get out of the hole
     • Lookup follows the same process to find the GUID
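A compact sketch of the DMAP mapping plus the hole-fixing rule from these two slides: hash the GUID to an IPv4 address and, if it lands in an unassigned block, rehash up to m times; lookups walk the same deterministic sequence. The assigned-/8 set and the use of SHA-256 are stand-ins for the real IPv4 assignment map and hash function.

```python
# Illustrative DMAP sketch: GUID -> IPv4 via hashing, with rehash-up-to-m-times
# to escape unassigned "IP holes". Assigned /8 set and hash are placeholders.
import hashlib
import ipaddress

ASSIGNED_SLASH8 = set(range(1, 224)) - {10, 127}   # toy stand-in for the real IPv4 map

def hash_to_ip(data: bytes) -> ipaddress.IPv4Address:
    h = hashlib.sha256(data).digest()
    return ipaddress.IPv4Address(int.from_bytes(h[:4], "big"))

def dmap_address(guid: str, m: int = 10) -> ipaddress.IPv4Address:
    """Deterministic mapping, used identically on insert and lookup."""
    candidate = hash_to_ip(guid.encode())
    for _ in range(m):
        if int(candidate) >> 24 in ASSIGNED_SLASH8:   # landed in an assigned /8 block
            return candidate
        candidate = hash_to_ip(candidate.packed)      # rehash to get out of the hole
    raise RuntimeError("still in a hole after m rehashes (slide reports 0.0009 at m=10)")

print(dmap_address("00101100...10011001"))
```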

  36. Fixing IP Holes for General Network Addressing Schemes
     • In a general network addressing scheme there can be more holes than used segments (e.g., IPv6)
     • Used address segments are hashed into N buckets, giving a two-level index: (bucket ID, segment ID)
     • Mapping a GUID to an NA: H1(GUID) → bucket ID; H2(GUID) → segment ID within that bucket
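A sketch of the two-level index for sparse address spaces (e.g., IPv6) described above: used address segments are grouped into buckets, H1(GUID) selects a bucket and H2(GUID) selects a segment within it, so the result always lands on a segment that is actually in use. The bucket contents and hashes are invented for the example.

```python
# Illustrative two-level index for a sparse (IPv6-like) address space:
# H1(GUID) selects a bucket of used segments, H2(GUID) selects a segment within it.
import hashlib

buckets = [                                   # N buckets of *used* address segments (toy data)
    ["2001:db8:a::/48", "2001:db8:b::/48"],
    ["2001:db8:c::/48"],
    ["2001:db8:d::/48", "2001:db8:e::/48", "2001:db8:f::/48"],
]

def _h(data: str, salt: bytes) -> int:
    return int.from_bytes(hashlib.sha256(salt + data.encode()).digest()[:8], "big")

def guid_to_segment(guid: str) -> str:
    bucket = buckets[_h(guid, b"H1") % len(buckets)]        # level 1: bucket ID
    return bucket[_h(guid, b"H2") % len(bucket)]            # level 2: segment in bucket

print(guid_to_segment("GUID_00101100"))       # always lands on a used segment
```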

  37. Mapping Replication
     • The GUID (00101100……10011001) is hashed with K hash functions to K IP locations, e.g., 44.32.1.153, 67.10.12.1, and 8.12.2.3 for K = 3
     • Every mapping is replicated at K random locations
     • Lookups can choose the closest of the K mappings → much reduced lookup latencies
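A short sketch of the K-replica scheme on this slide: K salted hashes place the GUID → NA mapping at K addresses, and a lookup picks the replica with the lowest measured latency. The address format and RTT values are made up for illustration.

```python
# Illustrative K-replica sketch: the GUID->NA mapping is stored at K hashed
# locations; a lookup queries all K and uses the closest. Distances are toy data.
import hashlib

def replica_locations(guid: str, k: int = 3) -> list:
    """K hash functions realized by salting one hash with the replica index."""
    return ["10.%d.%d.%d" % tuple(hashlib.sha256(f"{i}:{guid}".encode()).digest()[:3])
            for i in range(k)]

def lookup(guid: str, rtt_ms: dict) -> str:
    """Pick the replica location with the lowest measured latency."""
    return min(replica_locations(guid), key=lambda ip: rtt_ms.get(ip, float("inf")))

locs = replica_locations("00101100...10011001")
rtts = {locs[0]: 120.0, locs[1]: 35.0, locs[2]: 80.0}    # assumed measurements
print(locs)
print("query replica:", lookup("00101100...10011001", rtts))
```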

  38. Capturing Locality
     • Spatial locality: GUIDs are more often accessed by local nodes (within the same AS)
     • Solution: keep a local replica of the mapping (e.g., a local replica of GUID 10 in AS 1, alongside K = 3 global replicas in AS 200, AS 101, and AS 5)
     • A lookup can involve simultaneous local and global lookups (LNRS and GNRS)
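A small sketch of the simultaneous local/global lookup idea from this slide: query the local replica (LNRS) and the global GNRS in parallel and use whichever returns a usable answer first. The toy tables, delays, and thread-pool mechanics are illustrative only.

```python
# Illustrative sketch of simultaneous local (LNRS) and global (GNRS) lookups,
# returning whichever resolver answers first. Tables and delays are invented.
import concurrent.futures
import time

local_replica = {"GUID_10": "AS1.local_NA"}          # spatially local copy of the mapping
global_gnrs = {"GUID_10": "AS200.NA", "GUID_77": "AS5.NA"}

def lnrs_lookup(guid):
    time.sleep(0.01)                                  # assumed fast, nearby
    return ("LNRS", local_replica[guid]) if guid in local_replica else None

def gnrs_lookup(guid):
    time.sleep(0.05)                                  # assumed slower, wide-area
    return ("GNRS", global_gnrs[guid])

def resolve(guid):
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(f, guid) for f in (lnrs_lookup, gnrs_lookup)]
        for done in concurrent.futures.as_completed(futures):
            if done.result() is not None:             # first useful answer wins
                return done.result()

print(resolve("GUID_10"))    # typically answered by the local replica
print(resolve("GUID_77"))    # only the global service knows this one
```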

  39. Simulation Results – Query Latencies

  40. Evaluation: Tomorrow's Internet
     • A Jellyfish model captures each AS's distance to the core
     • Tomorrow's Internet: more and larger ASes, flatter topology
     • Scenarios: 20% more nodes in all 6 levels; double the nodes in the 4 levels

  41. Plural Routing – Z. Morley Mao

  42. Plural routing design: routing table reconstruction

  43. Plural routing: simulation-based evaluation
     • Scalability – routing table size: maximum routing table size < 5 MB; 99% of routing tables < 100 KB; a single Bloom filter is 1 KB to guarantee zero false positives
     • Flexibility – avoiding a domain: MIRO succeeds for 64% and plural routing for 69–70%; most of the remaining ~30% cannot be avoided
     • Flexibility – load balancing: the disjointness between the alternate route and the default one is high, close to optimal
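The slide reports results for routing tables rebuilt from Bloom filters; the sketch below shows the basic idea of such a table, with one small Bloom filter per next hop recording the destination networks reachable through it. The filter size, hash count, and forwarding loop are arbitrary illustrative choices, and false-positive handling is omitted.

```python
# Illustrative sketch of a Bloom-filter routing table: one small filter per next hop
# encodes the destinations reachable through it. Sizes and hashes are arbitrary.
import hashlib

class BloomFilter:
    def __init__(self, bits: int = 8192, hashes: int = 3):
        self.bits, self.hashes, self.bitmap = bits, hashes, bytearray(bits // 8)

    def _positions(self, item: str):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.bits

    def add(self, item: str):
        for p in self._positions(item):
            self.bitmap[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: str):
        return all(self.bitmap[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Routing table: next hop -> Bloom filter of reachable destination networks.
table = {"nexthop_A": BloomFilter(), "nexthop_B": BloomFilter()}
table["nexthop_A"].add("NA_4731")
table["nexthop_B"].add("NA_0042")

def forward(dest_na: str):
    for nexthop, bf in table.items():
        if dest_na in bf:        # may rarely false-positive; real filters are sized to avoid this
            return nexthop
    return None

print(forward("NA_4731"))   # -> nexthop_A
```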

  44. Plural routing prototyping
     • Implementation
       • Platform: Click router; naming service (GNRS?)
       • Steps: reconstruct routing tables using Bloom filters, keeping the rest of BGP; take RouteViews BGP traces as input to a Click router; perform route lookups and update the routing table accordingly
       • Milestones: ORBIT – test on a single router; GENI – test on a couple of POPs
     • Evaluation
       • Scalability: route lookup, routing table size, path inflation
       • Flexibility: avoiding intermediate domains, load balancing

  45. Edge-Aware Inter-Domain Routing – Tam Vu, D. Raychaudhuri

  46. Edge-aware Inter-domain Routing (EIR)
     • Provides network-level support for:
       • Robust message delivery to unstable mobile edge networks, using in-network name-to-address mapping and storage-aware routing
       • Flexible path selection for multi-homing, multi-path, and multi-network operation, by providing a full view of the network graph via a link-state protocol
       • Efficient multicast and anycast, using the name resolution service for membership management
       • Service-defined message delivery, via service-ID-based routing and forwarding

  47. EIR Mechanisms
     • Abstracts network entities to increase network visibility: aggregation nodes (aNodes) and virtual links (vLinks)
     • ASes distribute NSPs (network state packets) describing their internal network state and links to their neighbors (link-state protocol)
     • Telescopic routing updates reduce dissemination overhead by controlling the NSP update rate as a function of distance to the originating AS (see the sketch below)
     • Late name-to-address binding: routers can rebind GUID → address for packets in transit
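A toy rendering of the telescopic update idea above: the NSP refresh interval grows with the receiving AS's hop distance from the originating AS, so nearby ASes stay fresh while distant ones receive far fewer updates. The exponential schedule and base interval are assumptions, not EIR's actual parameters.

```python
# Illustrative telescopic-update sketch: NSP update interval grows with the
# receiver's hop distance from the originating AS. The exponential rule is assumed.

BASE_INTERVAL_S = 10            # assumed refresh interval for immediate neighbors

def update_interval(distance_hops: int) -> int:
    """Neighbors get every update; far-away ASes see exponentially fewer."""
    return BASE_INTERVAL_S * (2 ** max(distance_hops - 1, 0))

def should_send(distance_hops: int, seconds_since_last: int) -> bool:
    return seconds_since_last >= update_interval(distance_hops)

for d in range(1, 6):
    print(f"distance {d} hops -> send NSP every {update_interval(d)} s")
# distance 1 -> 10 s, 2 -> 20 s, 3 -> 40 s, 4 -> 80 s, 5 -> 160 s
```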

  48. EIR Prototyping
     • Click-based prototyping on ORBIT nodes; implementation on 200+ nodes on the grid
     • Evaluation metrics: packet loss rate, throughput, goodput, lookup delay, stretch, routing table size, etc.
     • (Diagram: EIR Click router – OSPF with telescoping, link-state advertisements, RIB, NSPs, GNRSd binding requests, next-hop table, and the EIR forwarding engine demuxing data packets by service ID (SID 1–3).)

  49. EIR Prototyping (2)
     • Click-based prototyping on ORBIT nodes
     • Message delivery with late binding
     • Storage-aware routing
     • Efficient multicast & multipath data delivery
     • (Diagram: a sender's traffic reaching a receiver moving along a mobile trajectory through a chain of EIR routers.)

  50. PacketCloud Compute Layer – Yang Chen, Xiaowei Yang
