Adaptive Multi-Source Streaming in Heterogeneous Peer-to-Peer Networks
Vikash Agarwal, Reza Rejaie
Computer and Information Science Department, University of Oregon
http://mirage.cs.uoregon.edu
January 19, 2005
Introduction
• P2P streaming is becoming increasingly popular
• Participating peers form an overlay to cooperatively stream content among themselves
• The overlay-based approach is the only way to efficiently support multi-party streaming applications without IP multicast
• Two components:
  • Overlay construction
  • Content delivery
• Each peer desires to receive the maximum quality that can be streamed through its access link
• Peers have asymmetric & heterogeneous bandwidth connectivity
• Each peer should receive content from multiple parent peers => multi-source streaming
• Multi-parent overlay structure rather than a tree
Benefits of Multi-Source Streaming
• Higher bandwidth to each peer
  • Higher delivered quality
• Better load balancing among peers
• Less congestion across the network
• More robust to the dynamics of peer participation
• Multi-source streaming introduces new challenges …
Multi-source Streaming: Challenges
• Congestion-controlled connections from different parent peers exhibit
  • Independent variations in bandwidth
  • Different RTT, bandwidth, and loss rate
• Aggregate bandwidth changes over time
  • The streaming mechanism should be quality adaptive
  • A static "one-layer-per-sender" approach is inefficient
• There must be a coordination mechanism among senders in order to
  • Efficiently utilize the aggregate bandwidth
  • Gracefully adapt delivered quality to bandwidth variations
• This paper presents a receiver-driven coordination mechanism for multi-source streaming called PALS
Previous Studies
• Congestion control was often ignored
• Server/content placement for streaming MD content [Apostolopoulos et al.]
• Resource management for P2P streaming [Cui et al.]
• Multi-sender streaming [Nguyen et al.], but they assumed
  • Aggregate bandwidth is more than the stream bandwidth
• RLM is receiver-driven, but …
  • RLM tightly couples coarse quality adaptation with congestion control
  • PALS only determines how the aggregate bandwidth is used
• P2P content distribution mechanisms cannot accommodate "streaming" applications
  • e.g., BitTorrent, Bullet
Overall Architecture
• Overall architecture for P2P streaming:
  • PRO: bandwidth-aware overlay construction
    • Identifying good parents in the overlay
  • PALS: multi-source adaptive streaming
    • Streaming content from selected parents
  • Distributed multimedia caching
• Decoupling overlay construction from delivery provides a great deal of flexibility
• PALS is a generic multi-source streaming protocol for non-interactive applications
Assumptions & Goals
• Assumptions:
  • All peers/flows are congestion controlled
  • Content is layered encoded
  • All layers are CBR with the same consumption rate*
  • All senders have all layers (relax this later)*
  • A limited window of future packets is available at each sender
  • Live but non-interactive
  • * Not requirements
• Goals:
  • Fully utilize the aggregate bandwidth to dynamically maximize delivered quality
  • Deliver the maximum number of layers
  • Minimize variations in quality
P2P Adaptive Layered Streaming (PALS)
• Receiver: periodically requests an ordered list of packets/segments from each sender
• Sender: simply delivers the requested packets in the given order at the congestion-controlled rate (see the sketch below)
• Benefits of ordering the requested list:
  • Gives the receiver flexibility to closely control which packets are delivered
  • Graceful degradation in quality when bandwidth suddenly drops
• Periodic requests => stability & less overhead
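A minimal sketch of this request/delivery exchange, assuming per-packet requests and a callback-style congestion controller; this is an illustration, not the authors' implementation:

```python
from collections import deque

class Sender:
    def __init__(self):
        self.queue = deque()  # ordered list of requested packets

    def receive_request(self, ordered_packets):
        # A fresh request overwrites any unsent leftovers from the
        # previous round (this is how a sudden bandwidth drop is absorbed).
        self.queue = deque(ordered_packets)

    def on_cc_send_opportunity(self):
        # Called whenever congestion control allows one packet out.
        if self.queue:
            return self.queue.popleft()  # deliver strictly in request order
        return None

class Receiver:
    def __init__(self, senders):
        self.senders = senders

    def issue_requests(self, per_sender_lists):
        # per_sender_lists: {sender: [(layer, timestamp), ...]}, ordered by
        # importance so low layers survive if bandwidth suddenly drops.
        for sender, pkts in per_sender_lists.items():
            sender.receive_request(pkts)
```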
Basic Framework
[Figure: content from Peers 0-2 arrives over the Internet; a demux splits the aggregate into per-layer flows bw0(t)…bw3(t) feeding buffers buf0…buf3, each drained at the consumption rate C by the decoder]
• Receiver passively monitors the EWMA bandwidth from each sender (see the sketch below)
  • EWMA aggregate bandwidth
• Estimate the total number of packets to be delivered during the next window (K)
• Allocate the K packets among active layers (quality adaptation)
  • Controlling bw0(t), bw1(t), …
  • Controlling the evolution of the buffer state
• Assign a subset of packets to each sender (packet assignment)
  • Allocating each sender's bandwidth among active layers
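A sketch of the receiver's passive bandwidth accounting; the constants and names here (PKT_SIZE, ALPHA) are assumptions for illustration:

```python
PKT_SIZE = 1000   # bytes per packet/segment (assumption)
ALPHA = 0.125     # EWMA gain, in the same spirit as TCP's SRTT gain

class BandwidthMonitor:
    def __init__(self, num_senders):
        self.ewma = [0.0] * num_senders  # bytes/sec per sender

    def on_packet(self, sender_id, measured_rate):
        # Fold each instantaneous rate sample into the per-sender EWMA.
        self.ewma[sender_id] += ALPHA * (measured_rate - self.ewma[sender_id])

    def aggregate(self):
        return sum(self.ewma)

    def packets_next_window(self, window_sec):
        # K = expected number of deliverable packets in the next window.
        return int(self.aggregate() * window_sec / PKT_SIZE)
```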
Key Components of PALS
• Sliding window (SW): keeps all senders busy & loosely synchronized with the receiver's playout time
• Quality adaptation (QA): determines the quality of the delivered stream, i.e., the required packets for all layers during one window
• Packet assignment (PA): properly distributes the required packets among senders
Sliding Window
• Buffering window: the range of timestamps for packets that must be requested in one window
• The window slides forward in a step-like fashion
• Requested packets per window can be from:
  • The playing window (loss recovery)
  • The buffering window (main group)
  • Future windows (buffering)
Sliding Window (cont'd)
• Window size determines the tradeoff between smoothness/signaling overhead and responsiveness
• It should be a function of RTT, since RTT specifies the timescale of variations in bandwidth
  • A multiple of the maximum smoothed RTT among senders (sketched below)
• The receiver might receive duplicates
  • Re-requesting a packet that is still in flight!
  • The ratio of duplicates is very low and can be reduced by increasing the window
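An illustrative sketch of the step-wise window, assuming a fixed RTT multiple (the actual multiplier is a tuning choice, not taken from the paper). Each request round groups losses, the current buffering window, and prefetch from future windows:

```python
WINDOW_RTT_MULTIPLE = 4  # assumption; a tunable knob

def window_size(srtts):
    # srtts: smoothed RTT estimates, one per sender (seconds).
    return WINDOW_RTT_MULTIPLE * max(srtts)

class SlidingWindow:
    def __init__(self, size_sec):
        self.size = size_sec
        self.start = 0.0  # timestamp where the buffering window begins

    def slide(self):
        # Step-like advance: jump forward by one full window.
        self.start += self.size

    def build_request(self, missing_before, needed_now, prefetch):
        # Order matters: losses first (playing window), then this window's
        # main group, then future-window packets, so the most urgent
        # packets survive a sudden bandwidth drop.
        return missing_before + needed_now + prefetch
```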
Coping with BW Variations
• The sliding window alone is insufficient
• Cope with a sudden drop in bandwidth by:
  • Overwriting the outstanding request at the senders
  • Ordering the requested packets
• Cope with a sudden increase in bandwidth by:
  • Requesting extra packets (see the sketch below)
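A small sketch of the "extra packets" idea; the padding fraction is an assumption. The extras are appended last, so they are only sent if the actual bandwidth exceeds the estimate, while a drop is absorbed because senders overwrite unsent requests each round:

```python
EXTRA_FRACTION = 0.25  # assumption: fraction of K requested as padding

def pad_request(ordered_request, future_packets):
    extra = int(len(ordered_request) * EXTRA_FRACTION)
    # Padding comes from future windows, appended after all required
    # packets so it never displaces more important data.
    return ordered_request + future_packets[:extra]
```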
Quality Adaptation
[Figure: same receiver architecture as before — per-layer flows bw0(t)…bw3(t) into buffers buf0…buf3, each drained at rate C by the decoder]
• Determines the required packets from future windows (a sketch follows this slide)
• Coarse-grained adaptation:
  • Add/drop a layer
• Fine-grained adaptation:
  • Controlling bw0(t), bw1(t), …
  • Loosely controlling the evolution of the receiver's buffer state/distribution
• What is a proper buffer distribution?
  • The buffer distribution determines what degree of bandwidth variations can be smoothed
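A minimal quality-adaptation sketch under the stated assumptions (layers are CBR at the same rate, so one window of one layer needs a fixed packet count). Lower layers are filled first; a top layer that cannot be sustained is dropped, approximating the coarse-grained add/drop decision:

```python
def allocate(K, num_layers, per_layer_pkts):
    """Split a budget of K packets across layers, lowest layer first."""
    alloc = []
    remaining = K
    for _ in range(num_layers):
        take = min(per_layer_pkts, remaining)
        alloc.append(take)
        remaining -= take
    # Coarse-grained adaptation: a top layer that cannot be filled for a
    # whole window is dropped; its budget buffers lower layers instead.
    while alloc and 0 < alloc[-1] < per_layer_pkts:
        remaining += alloc.pop()
    return alloc, remaining  # `remaining` goes toward future buffering
```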
Buffer Distribution
• Impact on delivered quality:
  • A conservative buffer distribution achieves long-term smoothing
  • An aggressive buffer distribution achieves short-term improvement
  • PALS leverages this tradeoff in a balanced fashion (toy sketch below)
• Window size affects buffering:
  • The amount of future buffering
  • The slope of the buffer distribution
• Multiple opportunities to request a packet (see paper)
  • Implicit loss recovery
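A toy illustration of the conservative-vs-aggressive knob; the weighting function here is invented for illustration and is not the paper's distribution. Skewing buffered data toward lower layers protects base quality (long-term smoothing), while a flatter distribution favors immediate quality:

```python
def target_distribution(total_pkts, num_layers, w):
    # w in [0, 1]: w near 1 concentrates buffering on lower layers
    # (conservative); w near 0 spreads it evenly (aggressive).
    weights = [(num_layers - i) ** (1 + 2 * w) for i in range(num_layers)]
    s = sum(weights)
    return [int(total_pkts * wt / s) for wt in weights]
```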
Packet Assignment
• How should the ordered list of selected packets from different layers be assigned to individual senders?
• The number of packets assigned to each sender must be proportional to its bandwidth contribution
• More important packets should be delivered first
• Weighted round-robin packet assignment strategy (sketched below)
• The strategy extends to support partially available content at each peer
• Please see the paper for further details
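A hedged sketch of weighted round-robin assignment (a deficit-counter variant, chosen here for simplicity): packets are walked in importance order and dealt to senders in proportion to their bandwidth share, so each sender's ordered list still leads with the most important packets it owns:

```python
def assign(ordered_pkts, sender_bw):
    # sender_bw: {sender_id: ewma_bandwidth}
    total = sum(sender_bw.values())
    quota = {s: bw / total for s, bw in sender_bw.items()}
    credit = {s: 0.0 for s in sender_bw}   # accumulated send credit
    lists = {s: [] for s in sender_bw}
    for pkt in ordered_pkts:                # most important first
        for s in credit:
            credit[s] += quota[s]
        # Hand the packet to the sender with the largest credit.
        winner = max(credit, key=credit.get)
        credit[winner] -= 1.0
        lists[winner].append(pkt)
    return lists
```

Over a long request list, each sender receives a packet count proportional to its EWMA bandwidth, which is the proportionality property the slide calls for.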
Performance Evaluation
• Using ns simulation to control bandwidth dynamics
• Focused on three key dynamics in P2P systems: bandwidth variations, peer participation, and content availability
• Senders with heterogeneous RTT & bandwidth
• Decoupled the underlying congestion-control mechanism from PALS
• Performance metrics: bandwidth utilization, delivered quality
• Two strawman mechanisms with static layer assignment to each sender:
  • Single Layer per Sender (SLS): sender i delivers layer i
  • Multiple Layers per Sender (MLS): sender i delivers layers j < i
Necessity of Coordination
• SLS & MLS exhibit high variations in quality:
  • No explicit loss recovery
  • No coordination
  • Inter-layer dependency magnifies the problem
• PALS effectively utilizes the aggregate bandwidth & delivers stable quality in all cases
Delay-Window Tradeoff
• Average delivered quality depends only on the aggregate bandwidth, even with heterogeneous senders
• Higher delay => smoother quality
• Duplicates decrease exponentially with window size
• Average per-layer buffering increases linearly with delay
• Increasing the window leads to a more even buffer distribution
• See the paper for more results
Conclusion & Future Work
• PALS is a receiver-driven coordination mechanism for streaming from multiple congestion-controlled senders
• Simulation results are very promising
• Future work:
  • Further simulations to examine more details
  • Prototype implementation for real experiments
  • Integration with the other components of our architecture for P2P streaming
Partially Available Content
• Effect of segment size and redundancy