MOve: an application-Malleable Overlay

Presentation Transcript


  1. MOve: an application-Malleable Overlay UIUC / INRIA Collaboration GDS meeting - LIP

  2. Disclaimer • Context of this work: • Work done during our collaboration with the University of Illinois at Urbana-Champaign • Indranil Gupta & Ramsés Morales • Side work

  3. Why another overlay? • Structured overlays • Chord • Unstructured overlays • KaZaA • Gnutella • SWIM

  4. Targeted applications • Group-based applications • Distributed whiteboard • Gaming platform • Replication service • … • Nodes within subgroups will interact

  5. Example: a gaming platform

  6. Needed properties • Connectivity • Nodes should be able to communicate with others • Efficient updates • Within a group, nodes share a common state • Volatility resilience • Both at global and subgroup levels

  7. Who knows whom? • Everyone knows everyone • Not scalable! • Only a partial view of the system • The who-knows-whom relation <=> an overlay • Ideally • Stay connected • Support for fault tolerance • Related nodes should be close in the overlay
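The slides show no code, but the "partial view" idea can be pictured as a bounded neighbor table per node, the union of which forms the overlay. The Java below is purely illustrative; the names (PartialView, maxSize) and the random-eviction choice are assumptions, not the MOve implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative partial view: each node stores only a bounded subset of peers.
class PartialView {
    private final int maxSize;                 // assumed bound, e.g. a few times log(n)
    private final List<String> neighbors = new ArrayList<>();
    private final Random rng = new Random();

    PartialView(int maxSize) { this.maxSize = maxSize; }

    // Add a peer; if the view is full, evict a random entry to stay bounded.
    void add(String peerId) {
        if (neighbors.contains(peerId)) return;
        if (neighbors.size() >= maxSize) {
            neighbors.remove(rng.nextInt(neighbors.size()));
        }
        neighbors.add(peerId);
    }

    List<String> neighbors() { return new ArrayList<>(neighbors); }
}
```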

  8. Random graph benefits • Theoretical results • The graph will stay connected if there are more than log(n) links per peer (where n is the overall number of peers in the system) • Goal • To keep connectivity => try to stay close to random graphs
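One concrete reading of the log(n) rule is to size each peer's view as a small constant times log(n). This is only a sketch under that assumption; the constant c and the method name are invented for illustration, not taken from the slides.

```java
// Sketch of the log(n) sizing rule: keep a bit more than ln(n) random links
// per peer so the overlay stays connected with high probability.
final class DegreeRule {
    // c > 1 is a safety constant (assumption, not from the slides).
    static int targetDegree(int systemSize, double c) {
        return (int) Math.ceil(c * Math.log(systemSize)); // natural log
    }

    public static void main(String[] args) {
        // e.g. 10,000 peers with c = 2 -> about 19 links per peer
        System.out.println(targetDegree(10_000, 2.0));
    }
}
```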

  9. Non-application links • Take advantage of random graphs • A subset of the links are "random" • Weighted according to the round-trip time -> taking the underlying topology into account • Use SWIM-style algorithms
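The slide only names the ingredients (random links, RTT weights). One plausible rendering, sketched here with invented names, is to bias the choice of a random non-application neighbor towards peers with low round-trip time:

```java
import java.util.List;
import java.util.Map;
import java.util.Random;

// Illustrative RTT-biased selection of a random (non-application) neighbor.
// Weighting by 1/RTT favours topologically close peers; all names are assumptions.
final class RttBiasedChoice {
    private final Random rng = new Random();

    String pick(List<String> candidates, Map<String, Double> rttMillis) {
        double total = 0;
        for (String peer : candidates) total += 1.0 / rttMillis.get(peer);
        double r = rng.nextDouble() * total;
        for (String peer : candidates) {
            r -= 1.0 / rttMillis.get(peer);
            if (r <= 0) return peer;
        }
        return candidates.get(candidates.size() - 1); // numerical fallback
    }
}
```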

  10. Application links • To take application groups into account • Create links between peers belonging to the same group • New links • Replacing non-application links • Sharing application links

  11. Sharing the same space

  12. Replacement policy • If there is enough room and no link exists -> link creation • If the node has enough resources -> link creation • (else) drop a non-application link, or change a non-application link into an application one
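Slides 10 and 12 together read like a small decision procedure for adding an application link. The sketch below is one possible rendering; the link sets, the maxLinks bound, and the hasSpareResources() hook are assumptions, since the slides do not define them.

```java
import java.util.Set;

// Possible rendering of the slide-12 replacement policy (illustrative only):
// 1) enough room and no link yet        -> create the application link
// 2) else, enough resources             -> create it anyway
// 3) else, drop or re-label a non-application link to make room.
final class ReplacementPolicy {
    void addApplicationLink(String peer, Set<String> appLinks,
                            Set<String> randomLinks, int maxLinks) {
        if (appLinks.contains(peer)) return;           // application link already exists
        if (randomLinks.contains(peer)) {              // already linked: just re-label it
            randomLinks.remove(peer);
            appLinks.add(peer);
            return;
        }
        int used = appLinks.size() + randomLinks.size();
        if (used < maxLinks || hasSpareResources()) {
            appLinks.add(peer);                        // room or resources enough: create
        } else if (!randomLinks.isEmpty()) {
            randomLinks.remove(randomLinks.iterator().next()); // drop one random link
            appLinks.add(peer);                        // ...and use the slot for the app link
        }
    }

    // Assumed hook (e.g. bandwidth or degree budget); not specified in the slides.
    private boolean hasSpareResources() { return false; }
}
```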

  13. What happens when a node joins?

  14. Random walk • A mechanism to get new neighbors • Called periodically • To avoid pathological topologies • For fault tolerance • To increase the clustering degree
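A minimal rendering of the periodic random walk described on slides 14-15: follow random neighbor links for a few hops and use the node reached as a candidate new neighbor. The hop count, the neighbor lookup, and the names are assumptions.

```java
import java.util.List;
import java.util.Random;
import java.util.function.Function;

// Minimal random walk over the overlay: follow random links for `hops` steps
// and return the node reached, to be offered as a candidate new neighbor.
final class RandomWalk {
    private final Random rng = new Random();

    String walk(String start, int hops, Function<String, List<String>> neighborsOf) {
        String current = start;
        for (int i = 0; i < hops; i++) {
            List<String> next = neighborsOf.apply(current);
            if (next.isEmpty()) break;            // dead end: stop early
            current = next.get(rng.nextInt(next.size()));
        }
        return current;                            // candidate new neighbor
    }
}
```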

  15. The random-walk mechanism

  16. Simulation • UIUC-INRIA_SIM • A discrete event simulator • ~5000 lines of Java code • Using the GT-ITM topology generator (Kenneth L. Calvert, Matthew B. Doar, and Ellen W. Zegura. Modeling Internet topology. IEEE Communications Magazine, 35(6):160-163, June 1997)
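UIUC-INRIA_SIM itself is not shown in the slides; the skeleton below is only a generic discrete-event loop in the same spirit, to illustrate what "discrete event simulator" means here. The class and method names are assumptions.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Generic discrete-event loop: events are ordered by timestamp and executed
// in order; each event handler may schedule further events. Purely illustrative.
final class EventLoop {
    record Event(double time, Runnable action) {}

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>(Comparator.comparingDouble(Event::time));
    private double now = 0;

    void schedule(double delay, Runnable action) {
        queue.add(new Event(now + delay, action));
    }

    void run(double endTime) {
        while (!queue.isEmpty() && queue.peek().time() <= endTime) {
            Event e = queue.poll();
            now = e.time();        // advance simulated time to the event
            e.action().run();      // execute the event handler
        }
    }
}
```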

  17. Evaluation: Clustering coefficient (random graph…)
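Slide 17 plots the clustering coefficient. Its standard definition is, for each node, the fraction of pairs of its neighbors that are themselves linked, averaged over all nodes. A small sketch of that metric follows; the adjacency-map input format is an assumption.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Average local clustering coefficient of an undirected graph given as an
// adjacency map: node id -> set of neighbor ids.
final class Clustering {
    static double averageCoefficient(Map<String, Set<String>> adjacency) {
        double sum = 0;
        int counted = 0;
        for (Map.Entry<String, Set<String>> e : adjacency.entrySet()) {
            List<String> nbrs = List.copyOf(e.getValue());
            int k = nbrs.size();
            if (k < 2) continue;                  // undefined for degree < 2
            int links = 0;
            for (int i = 0; i < k; i++) {
                Set<String> ni = adjacency.getOrDefault(nbrs.get(i), Set.of());
                for (int j = i + 1; j < k; j++) {
                    if (ni.contains(nbrs.get(j))) links++;
                }
            }
            sum += 2.0 * links / (k * (k - 1));   // fraction of connected neighbor pairs
            counted++;
        }
        return counted == 0 ? 0 : sum / counted;
    }
}
```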

  18. Evaluation: Connectivity

  19. Evaluation: Controlled clustering

  20. Evaluation: Link sharing benefit (1)

  21. Evaluation: Twisting the overlay

  22. Evaluation: Resilience to failures

  23. Conclusion • MOve: a malleable overlay • Nodes remain connected • Strong connections within subgroups • High volatility resilience • Paper submitted to DSN 2006

  24. Link with replication… • Far from JuxMem • BUT • Can be used for replication • Greater scale • Weaker guarantees
