Tapestry: A Resilient Global-Scale Overlay for Service Deployment
Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph
Presented by Yong Song (ysong@sslab.kaist.ac.kr)
Contents
• Introduction
• Tapestry
• Routing and Object-Location Scheme
• Node Insertion
• Node Deletion
• Tapestry Architecture
• Experimental Results
• Conclusion
Introduction
• Peer-to-peer system
  • A system for sharing resources among participating nodes
  • Centralized: a central server coordinates the system (e.g., Napster)
  • Decentralized: no central server (e.g., Gnutella, CAN, Chord, Pastry); each node can act as a server, a client, and a router
• OceanStore project
  • A global persistent data store designed to scale to billions of users
  • Provides a consistent, highly available, and durable storage utility atop an infrastructure composed of untrusted servers
  • Needs locality-aware routing, and Tapestry is well-suited to provide it
Tapestry
• A peer-to-peer overlay routing infrastructure offering efficient, scalable, location-independent routing of messages directly to nearby copies of an object or service, using only localized resources
• An extensible infrastructure that provides decentralized object location and routing (DOLR)
• Exploits locality when routing messages
• Provides an API to P2P application developers
• Maintains network integrity under dynamic network conditions
Tapestry – DOLR API
• PublishObject(OG, Aid): publishes object O on the local node
• UnpublishObject(OG, Aid): removes location mappings for O
• RouteToObject(OG, Aid): routes a message to a location of O, identified by its GUID
• RouteToNode(N, Aid, Exact): routes a message to application Aid on node N (Exact specifies whether the destination ID must match exactly)
Nid: nodeID, uniformly assigned at random from a large identifier space
OG: object GUID, selected from the same identifier space
Aid: application-specific identifier
* Multiple applications can share a single large Tapestry overlay network (a Java sketch of this interface follows)
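Below is a minimal Java sketch of how the DOLR interface above might be exposed to an application. Tapestry itself is implemented in Java, but the interface name, String GUIDs, and byte[] messages here are our assumptions, not the actual signatures:

    // Hypothetical Java view of the four DOLR operations above; the exact
    // names and types are assumptions for illustration only.
    public interface Dolr {
        void publishObject(String objectGuid, int appId);                      // PublishObject(OG, Aid)
        void unpublishObject(String objectGuid, int appId);                    // UnpublishObject(OG, Aid)
        void routeToObject(String objectGuid, int appId, byte[] msg);          // RouteToObject(OG, Aid)
        void routeToNode(String nodeId, int appId, boolean exact, byte[] msg); // RouteToNode(N, Aid, Exact)
    }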
Tapestry – a Node
• Maintains a routing table holding the nodeIDs and IP addresses of the local node's neighbors, organized into levels (L1–L4 in the example), one level per matched prefix digit
• Forwards messages to the neighbor whose nodeID matches a longer prefix of the destination identifier
• NodeIDs are 40 hex digits created by a hashing algorithm
• Each node also keeps object location pointers <ObjID, NodeID>, a local object store, and back pointers
[Figure: per-node state, with example routes for objects 435A and 1A3B]
Tapestry – Mesh and Routing
• Mesh and routing example: a message travels from node 5230 to node 42AD across Tapestry, resolving one more digit of the destination ID at each hop (4xxx, 42xx, 42Ax, 42AD); see the sketch below
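A minimal sketch of this prefix-routing step, using 4-digit hex IDs as in the example (real Tapestry IDs are 40 hex digits); the routing-table layout is an assumption:

    // table[level][digit] holds a neighbor that shares `level` prefix digits
    // with the local node and has `digit` as its next ID digit (null if none).
    public class PrefixRouter {
        private final String localId;     // e.g., "5230"
        private final String[][] table;   // [level][hex digit] -> neighbor ID or null

        public PrefixRouter(String localId, String[][] table) {
            this.localId = localId;
            this.table = table;
        }

        // Number of leading digits `dest` shares with the local ID.
        private int sharedPrefix(String dest) {
            int n = 0;
            while (n < localId.length() && localId.charAt(n) == dest.charAt(n)) n++;
            return n;
        }

        // Next hop toward `dest`: fix one more digit per hop.
        public String nextHop(String dest) {
            if (dest.equals(localId)) return localId;        // already at the destination
            int level = sharedPrefix(dest);                  // digits matched so far
            int digit = Character.digit(dest.charAt(level), 16);
            return table[level][digit];                      // null -> fall back to a surrogate (not shown)
        }
    }

Starting at 5230 with destination 42AD, repeated nextHop calls resolve the digits 4, 2, A, D one hop at a time, as in the example above.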
Tapestry – Publication and Location
• A server S publishes object O by routing a publish message toward O's root node
• Each node along the publication path stores a pointer mapping <OG, S>
• A query for O routes toward O's root and is redirected to S as soon as it reaches a node holding a pointer (sketch below)
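A simplified Java sketch of this publish/locate scheme, assuming the path from a node to the object's root is already known (in reality each hop is found by prefix routing):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class Locator {
        // Per-node pointer cache: object GUID -> server nodeID.
        static class Node {
            final Map<String, String> pointers = new HashMap<>();
        }

        // Publish: walk from the server toward the root, leaving a pointer at each hop.
        static void publish(List<Node> pathToRoot, String objectGuid, String serverId) {
            for (Node n : pathToRoot) n.pointers.put(objectGuid, serverId);
        }

        // Locate: walk toward the root; the first pointer found redirects the query.
        static String locate(List<Node> pathToRoot, String objectGuid) {
            for (Node n : pathToRoot) {
                String server = n.pointers.get(objectGuid);
                if (server != null) return server;   // redirect to the object's server
            }
            return null;                             // reached the root without a pointer
        }
    }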
Tapestry – Node Insertion
• Inserting a new node N into the Tapestry network requires:
  • Notifying need-to-know nodes of N, so that N fills the null entries in their routing tables
  • Moving locally rooted object references to N
• Acknowledged multicast (from node 42A3 in the figure) reaches every node sharing N's ID prefix; each one:
  • Adds N to its routing table
  • Transfers references of locally rooted pointers to N
• N then constructs a locally optimal routing table
• Finally, N notifies nearby nodes for further optimization
(a sketch of the multicast follows)
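A rough Java sketch of the acknowledged multicast step; childrenAt and notifyOfNewNode are hypothetical helpers, and a returning recursive call stands in for the acknowledgements that the real protocol aggregates back up the tree:

    import java.util.List;

    public class AckMulticast {
        interface Overlay {
            // Hypothetical: neighbors whose IDs extend the shared prefix by one more digit.
            List<String> childrenAt(String nodeId, int level, String newNodeId);
            // Hypothetical: add the new node to the routing table and transfer
            // locally rooted object pointers that now belong to it.
            void notifyOfNewNode(String nodeId, String newNodeId);
        }

        // Recursively notify every node sharing the new node's ID prefix,
        // descending one more digit per level; returning == acknowledged.
        static void multicast(Overlay overlay, String nodeId, int level, String newNodeId) {
            overlay.notifyOfNewNode(nodeId, newNodeId);
            for (String child : overlay.childrenAt(nodeId, level, newNodeId)) {
                multicast(overlay, child, level + 1, newNodeId);
            }
        }
    }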
Tapestry – Node Deletion
• Voluntary deletion of node N
  • Using back pointers, nodes that route through N replace its routing-table entry with a replacement node
  • The replacement node republishes the affected objects
  • N routes references to its locally rooted objects to their new roots
• Involuntary deletion
  • Expected in a failure-prone network such as the Internet
  • Periodic beacons detect failed outgoing links and nodes (sketch below)
  • Object references are republished to repair location state
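A small sketch of periodic beaconing for failure detection; the Neighbor interface and probe call are assumptions, not Tapestry's actual implementation:

    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class Beacon {
        interface Neighbor {
            boolean probe();     // hypothetical liveness check (e.g., a UDP ping)
            void markSuspect();  // trigger rerouting and republishing for this entry
        }

        static void start(List<Neighbor> neighbors, long periodSeconds) {
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(() -> {
                for (Neighbor n : neighbors) {
                    if (!n.probe()) n.markSuspect();   // failure -> repair routing and pointers
                }
            }, 0, periodSeconds, TimeUnit.SECONDS);
        }
    }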
Tapestry Architecture
• Application interface upcalls and downcalls: DELIVER(Gid, Aid, Msg), FORWARD(Gid, Aid, Msg), ROUTE(Gid, Aid, Msg, NextHopNode)
• Router: examines the destination GUID of each message and determines its next hop from the routing table and local object pointers
• Dynamic node management: adds or removes object pointers as neighbors arrive or depart
• Continuous link monitoring: fault detection, latency and loss-rate estimation
• Transport: TCP/IP, UDP/IP
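A minimal sketch of the router's dispatch decision, tying the upcalls above together; the Hooks interface is an assumption:

    public class Router {
        interface Hooks {
            String nextHop(String guid);                                       // from the routing table
            void deliver(String guid, int appId, byte[] msg);                  // DELIVER upcall
            void forward(String guid, int appId, byte[] msg, String nextHop);  // FORWARD, then ROUTE
        }

        static void route(Hooks hooks, String localId, String guid, int appId, byte[] msg) {
            String hop = hooks.nextHop(guid);
            if (hop == null || hop.equals(localId)) {
                hooks.deliver(guid, appId, msg);       // message has reached its root
            } else {
                hooks.forward(guid, appId, msg, hop);  // application may inspect before it moves on
            }
        }
    }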
Experiment
• Environment: local cluster, PlanetLab, and a simulator (SOSS)
• Micro-benchmarks on the local cluster
  • Message processing overhead is proportional to processor speed, so it can ride Moore's Law
  • Message throughput peaks at a 4 KB message size
• Implemented in Java
Experiment Result (1)
• Efficiency (routing overhead)
• RDP (Relative Delay Penalty): the ratio of the distance traveled via Tapestry location and routing to the distance traveled via direct routing to the object (see the formula below)
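Written as a formula (our notation, matching the definition above): for a client c locating object o,

    \mathrm{RDP}(c, o) = \frac{d_{\text{Tapestry}}(c, o)}{d_{\text{IP}}(c, o)}

so RDP = 1 means the overlay adds no stretch over direct IP routing, and values close to 1 indicate good locality.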
Experiment Result (2)
• Object location optimization with additional pointers
  • Publish to the k backup nodes of the next hop on the publish path
  • Publish to the nearest l neighbors of the current hop
  • Apply both along the first m hops of the path
• Setup: 1,092 nodes, 25 objects
Experiment Result (3)
• Scalability
• A single node repeatedly inserted and deleted, 20 times
• Measured integration latency and bandwidth used
Experiment Result (4)
• Resilience against network dynamics (20 runs)
Experiment Result (5)
• Resilience against network dynamics under churn
• Churn 1: Poisson arrival process with a mean interarrival time of 20 s; mean session lifetime 4 min
• Churn 2: mean interarrival time 10 s; mean session lifetime 2 min
(a sampling sketch follows)
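A small sketch of how such a churn workload can be generated: a Poisson process with a given mean interarrival time has exponentially distributed gaps, sampled here by inverse transform (class name and seed are illustrative):

    import java.util.Random;

    public class Churn {
        // Sample an exponentially distributed interarrival gap with the given mean.
        static double nextArrivalGap(Random rng, double meanSeconds) {
            return -meanSeconds * Math.log(1.0 - rng.nextDouble());
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            double t = 0;
            for (int i = 0; i < 5; i++) {
                t += nextArrivalGap(rng, 20.0);   // Churn 1: mean gap of 20 s
                System.out.printf("node %d joins at t = %.1f s%n", i, t);
            }
        }
    }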
Conclusion
• Tapestry
  • An overlay routing network for decentralized P2P systems
  • Provides an API infrastructure for application developers
  • Provides efficient, scalable routing of messages directly to nodes in a large, sparse address space
  • Resilient under dynamic network conditions
Thanks for your attention! Any questions?