Scalable Application Layer Multicast Suman Banerjee, Bobby Bhattacharjee, Christopher Kommareddy ACM SIGCOMM Computer Communication Review, Vol. 32, No. 4, Aug. 2002 (Proceedings of the 2002 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications)
Outline • Introduction • Hierarchical Membership • Protocol Description • Simulation Results • Experimental Results
Introduction • Network layer multicast • Not widely deployed • Application layer multicast • Requires no changes to the network infrastructure • Implements multicast forwarding functionality exclusively at end hosts
Introduction • NICE (a recursive acronym: NICE is the Internet Cooperative Environment) • NICE is designed to • Support large receiver sets • Incur small control overhead • Build low-latency distribution trees
Hierarchical Membership • Clients are assigned to different layers • Each layer is partitioned into a set of clusters of size between k and 3k − 1, where k is a constant parameter • All hosts belong to the lowest layer L_0 • The host with the minimum maximum distance to all other hosts in the cluster (the graph-theoretic center) is chosen as the leader
Hierarchical Membership • Leaders of clusters in layer L_i join layer L_{i+1} • At most log_k N layers • Each host maintains state about all the clusters it belongs to and about its super-cluster
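As a concrete illustration of the leader rule above, here is a minimal Python sketch that picks the cluster's graph-theoretic center; the hosts and pairwise latencies are hypothetical, not data from the paper.

```python
# Minimal sketch of NICE's leader rule: the leader is the member whose
# maximum distance to every other cluster member is smallest (the
# graph-theoretic center). Hosts and latencies are illustrative.

def cluster_center(members, dist):
    """Return the member that minimizes its maximum distance to the rest."""
    return min(
        members,
        key=lambda h: max(dist[h][other] for other in members if other != h),
    )

# Hypothetical 4-host cluster with symmetric latencies in milliseconds.
latency = {
    "A": {"B": 10, "C": 40, "D": 30},
    "B": {"A": 10, "C": 25, "D": 20},
    "C": {"A": 40, "B": 25, "D": 15},
    "D": {"A": 30, "B": 20, "C": 15},
}
print(cluster_center(["A", "B", "C", "D"], latency))  # -> B (max distance 25)
```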
Control Paths • Control paths • Exchange periodic state refreshes • For a host X, the peers on its control topology are the other members of the clusters to which X belongs • E.g. • For A0: A1, A2, B0 • For B0: A0, A1, A2, B1, B2, C0
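The peer relation follows directly from cluster membership; a short sketch, with cluster contents mirroring the A0/B0 example on this slide:

```python
# Sketch of the control topology: a host's peers are the other members
# of every cluster it belongs to. Cluster contents follow the example
# above (B0 leads the L0 cluster and also sits in an L1 cluster).

clusters = [
    {"A0", "A1", "A2", "B0"},   # an L0 cluster, led by B0
    {"B0", "B1", "B2", "C0"},   # the L1 cluster that B0 belongs to
]

def control_peers(host):
    """Union of co-cluster members across all of the host's clusters."""
    peers = set()
    for cluster in clusters:
        if host in cluster:
            peers |= cluster - {host}
    return peers

print(sorted(control_peers("A0")))  # ['A1', 'A2', 'B0']
print(sorted(control_peers("B0")))  # ['A0', 'A1', 'A2', 'B1', 'B2', 'C0']
```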
Data Paths • Data paths • Source-specific tree • Run the following algorithm
Data Paths • When host h receives data from host p • h forwards the packet to all clusters it belongs to, except the one containing p
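A minimal sketch of this forwarding rule; the cluster membership is illustrative, and the send callback stands in for actual packet delivery:

```python
# Sketch of the data-path rule: host h forwards a packet received from
# peer p into every cluster h belongs to, except the cluster the packet
# arrived from. Membership below is illustrative.

clusters = [{"A0", "A1", "A2", "B0"}, {"B0", "B1", "B2", "C0"}]

def forward(h, p, packet, send):
    for cluster in clusters:
        if h not in cluster or p in cluster:
            continue                      # skip the cluster the data came from
        for member in cluster - {h}:
            send(member, packet)

# B0 hears a packet from C0 (their shared L1 cluster), so it forwards
# only within its L0 cluster: to A0, A1, A2.
forward("B0", "C0", "data", send=lambda m, pkt: print("->", m))
```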
NICE Trees Analysis • Worst-case control overhead, at the leader of the highest-layer cluster • O(k · log_k N) • Average control overhead per member • O(k) • No. of hops on the data path • O(log_k N)
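To make the bounds concrete, a back-of-the-envelope calculation with illustrative values k = 3 and N = 10,000 (these numbers are made up, not from the paper):

```python
# Illustrative arithmetic for the stated bounds.
import math

k, N = 3, 10_000
layers = math.ceil(math.log(N, k))   # at most log_k N layers
leader_state = k * layers            # worst case, highest-layer leader: O(k log_k N)
data_hops = layers                   # data-path hops: O(log_k N)

print(layers, leader_state, data_hops)  # 9 27 9
```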
Protocol Description • Assumption • All hosts know the "Rendezvous Point" (RP) host • The RP is always the leader of the single cluster in the highest layer • The RP interacts with other hosts on the control path, but not on the data path
New Host Joins • Join procedure • Contact the RP to get the cluster members of the highest layer • Repeat until layer 0 is reached • Query the members of the returned cluster and find the closest one, X • Get the members of the child-cluster of X
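A toy sketch of this descent over a hypothetical two-layer hierarchy; the layout, RTTs, and helper names are all illustrative:

```python
# Toy sketch of the join walk: start at the highest-layer cluster and,
# at each layer, probe the members and descend into the child cluster
# of the closest one. All data below is illustrative.

top_cluster = ["B0", "C0"]                  # the single L1 cluster
child = {"B0": ["A0", "A1", "B0"],          # L0 cluster led by B0
         "C0": ["C0", "C1", "C2"]}          # L0 cluster led by C0
rtt = {"A0": 40, "A1": 35, "B0": 30, "C0": 10, "C1": 12, "C2": 50}

def join(cluster, layers):
    for _ in range(layers):
        closest = min(cluster, key=rtt.get)  # probe members, keep closest
        cluster = child[closest]             # descend one layer
    return cluster                           # the L0 cluster to join

print(join(top_cluster, layers=1))  # -> ['C0', 'C1', 'C2']
```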
New Host Joins • Overhead: O(k · log_k N) • Latency: O(log_k N) · RTT
New Host Joins • The cluster-leader may change as members join or leave • On a change in leadership of a cluster C in layer L_j • The current leader of C removes itself from all layers above L_j • Each affected cluster chooses a new leader • The new leaders join their super-clusters • If the super-cluster state is not locally available, contact the RP
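The cascade can be sketched as below; the Host and Cluster methods used here are hypothetical placeholders, not an API from the paper:

```python
# Sketch of the leadership-change cascade for a cluster in layer L_j.
# Every method name on old_leader / cluster / rp is a hypothetical
# placeholder standing in for real protocol state.

def handle_leader_change(old_leader, j, top_layer, rp):
    for layer in range(j + 1, top_layer + 1):
        cluster = old_leader.cluster_in(layer)   # cluster old_leader sat in
        cluster.remove(old_leader)               # it withdraws from layers > L_j
        new_leader = cluster.elect_center()      # new leader = new cluster center
        # The new leader joins its super-cluster, asking the RP when
        # that state is not locally available.
        members = new_leader.super_cluster() or rp.super_cluster_of(layer)
        new_leader.join(members)
```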
Cluster Maintenance and Refinement • A cluster-leader periodically checks the size of its cluster in layer L_i • If the cluster size exceeds the 3k − 1 limit • Split the cluster into two equal-sized clusters such that the larger of the two cluster radii is minimized • If the cluster size falls below k • The leader finds the closest host in layer L_{i+1} and merges its cluster with that host's cluster
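A simplified sketch of the size check; note that the split shown here just sorts members by distance to the leader and halves the list, a stand-in for the paper's min-max-radius split:

```python
# Sketch of the periodic size check a cluster-leader runs. The split
# heuristic (halve the members, sorted by distance to the leader) is a
# simplification of the paper's min-max-radius split.

K = 3  # cluster-size parameter k

def maintain(cluster, leader, dist):
    if len(cluster) > 3 * K - 1:                 # too big: split
        ordered = sorted(cluster, key=lambda h: dist[leader][h])
        mid = len(ordered) // 2
        return [ordered[:mid], ordered[mid:]]
    if len(cluster) < K:                         # too small: merge
        return "merge with the closest L(i+1) host's cluster"
    return [cluster]

# 9 members exceed 3k - 1 = 8, so the cluster splits into two halves.
dist = {"L": {"L": 0, "a": 5, "b": 7, "c": 9, "d": 11,
              "e": 13, "f": 15, "g": 17, "h": 19}}
big = ["L", "a", "b", "c", "d", "e", "f", "g", "h"]
print(maintain(big, "L", dist))  # -> [['L', 'a', 'b', 'c'], ['d', ..., 'h']]
```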
Cluster Maintenance and Refinement • Each member H in any layer L_i periodically probes all members of its super-cluster to identify the closest one • If H finds a member J closer than its current cluster-leader, H leaves its cluster and joins the one led by J
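A minimal sketch of that periodic check, with illustrative hosts and RTTs:

```python
# Sketch of the refinement probe: H measures its distance to every
# member of its super-cluster and moves only if someone beats its
# current leader. Hosts and RTTs are illustrative.

def refine(current_leader, super_cluster, probe):
    closest = min(super_cluster, key=probe)
    # Move only when the probe finds a strictly closer member.
    return closest if probe(closest) < probe(current_leader) else current_leader

rtt = {"B0": 30, "C0": 12, "D0": 45}
print(refine("B0", ["B0", "C0", "D0"], probe=rtt.get))  # -> C0: H switches
```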
Host Departure and Leader Selection • Node H leaves • Graceful leave • H sends a leave message to all clusters it belongs to • Ungraceful leave • Other hosts detect the departure from the absence of H's periodic refreshes • If H was a leader • Each remaining member, J, selects a new leader independently • Conflicting choices are resolved through the exchange of refresh messages
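A sketch of detecting an ungraceful leave from missed refreshes; the refresh period and the three-period timeout are illustrative choices, not values from the paper:

```python
# Sketch of ungraceful-leave detection: a peer that has not refreshed
# for several periods is presumed gone.

import time

REFRESH_PERIOD = 5.0   # seconds between heartbeats (illustrative)
last_refresh = {}      # peer -> monotonic time of last refresh heard

def note_refresh(peer):
    last_refresh[peer] = time.monotonic()

def departed_peers(now=None):
    now = time.monotonic() if now is None else now
    return [p for p, t in last_refresh.items()
            if now - t > 3 * REFRESH_PERIOD]

note_refresh("H")
print(departed_peers(now=time.monotonic() + 60))  # -> ['H']
```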
Conclusions • Proposed a new application layer multicast protocol • Low control overhead • Low link stress