
Distributed Transactional Memory for General Networks



Presentation Transcript


  1. Distributed Transactional Memory for General Networks Gokarna Sharma, Costas Busch, Srivathsan Srinivasagopalan Louisiana State University May 24, 2012

  2. Distributed Transactional Memory • Transactions run on network nodes • They request shared objects, distributed over the network, for either reading or writing • Transactions appear to execute atomically • Reads and writes on shared objects are supported through three operations: Publish, Lookup, and Move
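The semantics of the three operations can be sketched as follows. This is an illustrative model only: the `Directory` and `SharedObject` names and the flat dictionary are assumptions, not the paper's distributed implementation.

```python
class SharedObject:
    """A shared object with one writable main copy and read-only replicas."""

    def __init__(self, name, owner):
        self.name = name
        self.owner = owner      # node holding the main copy
        self.replicas = set()   # nodes holding read-only copies


class Directory:
    """Centralized stand-in for the distributed directory's semantics."""

    def __init__(self):
        self.objects = {}

    def publish(self, obj_name, creator):
        # The creator announces a newly created object.
        self.objects[obj_name] = SharedObject(obj_name, creator)

    def lookup(self, obj_name, reader):
        # Replicate a read-only copy to the requesting node;
        # the main copy stays where it is.
        obj = self.objects[obj_name]
        obj.replicas.add(reader)
        return obj.owner

    def move(self, obj_name, writer):
        # Relocate the main copy to the requesting node and
        # invalidate all outstanding read-only copies.
        obj = self.objects[obj_name]
        previous = obj.owner
        obj.owner = writer
        obj.replicas.clear()
        return previous
```

Lookup leaves the directory state unchanged apart from adding a replica, while Move transfers ownership and invalidates every copy, matching slides 4–7.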

  3. Suppose the object ξ resides at a predecessor node and another node issues a request for it (figure: requesting node and predecessor node holding ξ). Throughout, transactions are immobile and the objects are mobile.

  4. Lookup operation • Replicates a read-only copy of ξ to the requesting node; the main copy stays at its current node

  5. Lookup operation • Further Lookups replicate read-only copies of ξ to each requesting node; there is still a single main copy

  6. Move operation • Relocates the main copy of ξ explicitly to the requesting node, invalidating the copy at the previous owner

  7. Move operation • A subsequent Move again relocates the main copy explicitly to the requesting node, invalidating all outstanding copies

  8. Need a distributed directory protocol • To deliver objects to requesting nodes efficiently, implementing the Publish, Lookup, and Move operations • To maintain consistency among the copies of a shared object

  9. Existing Approaches • D is the diameter of the network • S* is the stretch of the tree used

  10. Scalability Issues/Race Conditions • Locking is required • Figure: a Ballistic configuration at time t — the object sits at level k−1 among nodes A, C, B; a Lookup from C is probing parent(B) at level k+1 while a Move from A proceeds through parent(A) at level k; probes at a level go left to right, so the two operations can miss each other

  11. Spiral: a directory protocol for general networks with O(log²n · log D) stretch that avoids race conditions

  12. In the Remainder… • Model • Hierarchical Directory Construction • How Spiral Supports Publish, Lookup, and Move • Analogy to a Distributed Queue • Spiral Hierarchy Parameters and Analysis • Lookup Stretch • Move Stretch • Discussion

  13. Model • General network G = (V,E) of n reliable nodes with diameter D • One shared object • Nodes receive-compute-send atomically • Nodes are uniquely identified • Node u can send to node v if it knows v • One node executes one request at a time

  14. Spiral Approach: Hierarchical clustering Network graph

  15. Spiral Approach: Hierarchical clustering Alternative representation as a hierarchy tree with leader nodes

  16. At the lowest level (level 0), every node is its own cluster • Each cluster has a leader maintaining a directory entry: a downward pointer toward the object, when the object's locality is known
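Slides 14–16 describe the hierarchical clustering. A minimal sketch on a path of nodes, assuming clusters simply double in size per level (the real construction uses sparse covers of the network graph):

```python
import math

def hierarchy(nodes):
    """Build cluster levels of doubling size: level 0 holds singleton
    clusters, the top level one cluster containing every node. The first
    node of each cluster can serve as its leader."""
    h = max(1, math.ceil(math.log2(len(nodes))))
    return [[nodes[j:j + 2 ** i] for j in range(0, len(nodes), 2 ** i)]
            for i in range(h + 1)]
```

On eight nodes this yields four levels, from eight singletons at level 0 up to one all-encompassing cluster at level 3.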

  17. A Publish operation • Assume the owner node is the creator of ξ and invokes the Publish operation • Nodes know their parent in the hierarchy

  18. Send request to the leader root

  19. Continue up phase root Sets downward pointer while going up

  20. Continue up phase root Sets downward pointer while going up

  21. Root node found, stop up phase root

  22. A successful Publish operation: a path of downward pointers now leads from the root to the predecessor node holding ξ
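Slides 17–22 can be condensed into a short sketch: assuming each node knows its `parent` leader, Publish walks up to the root and leaves a downward pointer at every leader it passes (names are illustrative, and the multi-parent labeled hierarchy is simplified to a tree):

```python
def publish(owner, parent, pointer):
    """Up phase of Publish: climb from the owner to the root, setting at
    each leader a downward pointer toward the node it was reached from."""
    node = owner
    while node in parent:              # the root has no parent
        pointer[parent[node]] = node   # leader now points down toward the object
        node = parent[node]
    return node                        # the root, where the up phase stops
```

After the call, following `pointer` from the root leads back to the owner, which is exactly the state a later Move or Lookup relies on.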

  23. Supporting a Move operation root Requesting node Predecessor node ξ • Initially, nodes point downward to object owner (predecessor node) due to Publish operation • Nodes know their parent in the hierarchy

  24. Send request to leader node of the cluster upward in hierarchy root

  25. Continue up phase until downward pointer found root Sets downward path while going up

  26. Continue up phase root Sets downward path while going up

  27. Continue up phase root Sets downward path while going up

  28. Downward pointer found, start down phase root Discards path while going down

  29. Continue down phase root Discards path while going down

  30. Continue down phase root Discards path while going down

  31. Predecessor reached; the object is moved from the predecessor node to the requesting node • A Lookup is similar, but the directory structure is unchanged and only a read-only copy of the object is sent
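Slides 23–31 can be sketched the same way: the up phase redirects pointers toward the requester until it hits an existing downward pointer, and the down phase follows and discards the old pointers to reach the predecessor. This is a simplified single-parent model of the protocol, not the labeled multi-parent hierarchy:

```python
def move(requester, parent, pointer):
    """Move: the up phase climbs from the requester, redirecting each
    leader's downward pointer toward the requester, until an existing
    pointer is found; the down phase then follows and discards the old
    pointers until it reaches the predecessor leaf."""
    node = requester
    while True:                          # up phase
        up = parent[node]
        old = pointer.get(up)
        pointer[up] = node               # redirect toward the requester
        node = up
        if old is not None:
            node = old                   # pointer found: switch to down phase
            break
    while node in pointer:               # down phase
        node = pointer.pop(node)         # discard the path while descending
    return node                          # the predecessor node
```

Starting from the state Publish left behind, a Move by another node returns the old owner as predecessor and leaves a fresh downward trail to the requester.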

  32.–36. Distributed Queue • Successive Move requests form a distributed queue: the head holds the object and the tail is the most recent requester • As requests from u, v, and w arrive, each new request enqueues behind its predecessor, and the object is passed from head to tail as each transaction completes (figures: queue states across the five slides)
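The queue behavior of slides 32–36 can be sketched directly; `DistributedQueue` is an illustrative abstraction of the ordering the directory enforces, not a data structure from the protocol itself:

```python
from collections import deque

class DistributedQueue:
    """Moves form a queue: the head holds the object, and each new
    request enqueues behind the current tail, learning its predecessor
    (the node it will receive the object from)."""

    def __init__(self, owner):
        self.queue = deque([owner])

    def enqueue(self, node):
        predecessor = self.queue[-1]   # current tail serves this request
        self.queue.append(node)
        return predecessor

    def dequeue(self):
        """Head finishes its transaction and hands the object onward."""
        self.queue.popleft()
        return self.queue[0] if self.queue else None
```

Enqueuing v then w behind owner u yields predecessors u and v respectively; dequeuing passes the object down the line in exactly that order.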

  37. Spiral is Starvation Free • There is always a path of downward pointers from the root node to a leaf node • No finite set of requests has successor links that form a cycle • Hence all requests terminate in a bounded amount of time

  38. Spiral avoids Race Conditions • No need to lock multiple parent nodes at the same level simultaneously • Label all the parents at each level and visit them in the order of the labels (figure: the slide-10 scenario, with the level-k+1 parents labeled 1, 2, 3 and probed in label order)
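The label-ordered probing can be sketched as follows; `visit_parents` and the probe callback are illustrative names (in Spiral, the parents are the O(log n) cluster leaders one level up):

```python
def visit_parents(parents_by_label, probe, visited):
    """Probe a node's parents one at a time, in increasing label order.
    Because every operation uses the same global order, a Lookup and a
    concurrent Move cannot probe the same level in conflicting orders."""
    for label in sorted(parents_by_label):
        node = parents_by_label[label]
        visited.append(node)
        if probe(node):            # e.g. "does this leader hold a pointer?"
            return node
    return None
```

Probing one parent at a time in a fixed global order removes the need to hold locks on several parents at once, which is the race-avoidance argument of this slide.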

  39. Spiral Hierarchy • An (O(log n), O(log n))-labeled sparse cover hierarchy constructed from O(log n) hierarchical partitions • At level 0, each node belongs to exactly one cluster • At level h, all nodes belong to one cluster with root r • At each level 0 &lt; i &lt; h, each node belongs to exactly O(log n) clusters, all carrying distinct labels

  40. Spiral Hierarchy • How to find a predecessor node? Via spiral paths: for each leaf node u, the spiral path p(u) visits the leaders of all the clusters that contain u, from level 0 up to the root level • The hierarchy guarantees: (1) for any two nodes u, v, their spiral paths p(u) and p(v) meet at level min{h, log(dist(u,v)) + 2}; (2) the length of p(u) up to level i, length(p_i(u)), is at most O(2^i log²n)
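A spiral path can be sketched over any cluster hierarchy; here `levels` lists the clusters per level and `leader` picks a cluster's leader (illustrative, ignoring the within-level label ordering):

```python
def spiral_path(u, levels, leader):
    """The spiral path of leaf u visits, from level 0 up to the root,
    the leader of every cluster containing u."""
    return [leader(c) for clusters in levels for c in clusters if u in c]
```

In a toy three-level hierarchy, any two spiral paths end at the same top-level leader, illustrating guarantee (1) for nodes whose distance forces the meeting point up to the root.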

  41. (Canonical) downward paths • The downward pointers set along a node's spiral path p(u) form a (canonical) downward path from the root to u (figures: spiral paths p(u), p(v) and the corresponding downward paths)

  42. Analysis: Lookup Stretch • If there is no Move, a Lookup r issued at w finds a downward path to the owner v at level log(dist(w,v)) + 2 = O(i), where dist(w,v) ≤ 2^i • When there are Moves, it can be shown that r finds a downward path to v at level k = O(i + log log²n) • Cost: following the spiral path p(w) up to level k costs O(2^k log²n), the canonical downward path costs O(2^k log n), and the remaining portion of p(v) costs O(2^i log²n), while the optimal cost is C*(r) ≥ 2^(i−1) • Hence C(r)/C*(r) = [O(2^k log²n) + O(2^k log n) + O(2^i log²n)] / 2^(i−1) = O(log⁴n)

  43. Analysis: Move Stretch • Assume a sequential execution R of l+1 requests, where r0 is an initial Publish followed by Move requests r1, …, rl • With S_k denoting the number of requests whose up phase reaches level k, the optimal cost satisfies C*(R) ≥ max_{1≤k≤h} (S_k − 1)·2^(k−1) • Spiral's cost satisfies C(R) ≤ O(log²n · h) · max_{1≤k≤h} (S_k − 1)·2^(k−1) • Thus C(R)/C*(R) = O(log²n · h) = O(log²n · log D), since h = O(log D)

  44. Summary • A distributed directory protocol Spiral for general networks that • Has poly-logarithmic stretch • Is starvation free • Avoids race conditions • Factors in the stretch are mainly due to the parameters of the hierarchical clustering

  45. Thank you!!!
