CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming Jiangchuan Liu with Xinyan Zhang, Bo Li, and T.S.P. Yum Infocom 2005
Some Facts DONet – Data-driven Overlay Network CoolStreaming – Cooperative Overlay Streaming First release (CoolStreaming v0.9) • May 2004 As of March 2005 • Downloads: >100,000 • Average online users: 6,000 • Peak-time online users: 14,000 • Google entries (CoolStreaming): 5,130
Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work
Motivation • Enable large-scale live broadcasting in the Internet environment • Capacity limitation • Streaming rate: 500 kbps; server outbound bandwidth: 100 Mbps • Only 200 concurrent users • Network heterogeneity • No QoS guarantee
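The 200-user figure on this slide is simply the server's outbound bandwidth divided by the per-stream rate; a quick sketch with the slide's numbers:

```python
# Unicast capacity of a single server: outbound bandwidth / stream rate.
stream_rate_kbps = 500           # streaming rate from the slide
server_outbound_kbps = 100_000   # 100 Mbps server uplink

max_concurrent_users = server_outbound_kbps // stream_rate_kbps
print(max_concurrent_users)      # 200 -- hence the need for peer capacity
```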
Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work
Related Solutions • Content distribution networks • Expensive • Not quite scalable to large audiences • Live streaming support (?) • Self-organized overlay networks • Application layer multicast • Peer-to-peer communications
Application Layer Multicast • Issue: Structure construction • Tree • NICE, CoopNet, SpreadIt, ZIGZAG • Mesh • Narada and its extension • Multi-tree • SplitStream
Application Layer Multicast (cont’d) • Issue: Node dynamics • Structure maintenance • Passive/proactive repairing algorithms • Advanced coding • PALS (layered coding) • CoopNet (multiple description coding)
Gossip-based Dissemination • Gossip • Iteration • A node sends a new message to a random set of nodes • Each receiver does the same in the next round • Pros: simple, robust • Cons: redundancy, delay • Related • Peer-to-peer on-demand streaming
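The gossip iteration above can be simulated in a few lines; a minimal sketch, where fanout, round count, and function names are illustrative (not from the paper). The overlapping random targets are exactly the redundancy listed as a con:

```python
import random

def gossip_rounds(n_nodes, fanout, rounds, seed=1):
    """Simulate push gossip: each informed node sends the message to
    `fanout` random nodes per round; return informed count per round."""
    rng = random.Random(seed)
    informed = {0}                 # node 0 originates the message
    history = [len(informed)]
    for _ in range(rounds):
        targets = set()
        for node in informed:
            # random targets may overlap -- this duplication is
            # gossip's redundancy cost, traded for robustness
            targets.update(rng.sample(range(n_nodes), fanout))
        informed |= targets
        history.append(len(informed))
    return history

print(gossip_rounds(1000, 3, 8))   # roughly exponential growth, then saturation
```

The delay con is also visible: even with exponential growth, reaching all nodes takes O(log N) rounds.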
Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work
Data-driven Overlay (DONet) • Target • Live media broadcasting • No IP multicast support • Core operations • Every node periodically exchanges data availability information with a set of partners • It then retrieves missing data from one or more partners, and supplies available data to partners in return
Features of DONet • Easy to implement • No need to construct and maintain a complex global structure • Efficient • Data forwarding is dynamically determined by data availability, not restricted to specific directions • Robust and resilient • Adaptive, quick switching among multiple suppliers
Key Modules • Membership manager • mCache – partial overlay view • Updated by gossip • Partnership manager • Random selection • Partner refinement • Transmission scheduler
Transmission Scheduling Problem: from which partner to fetch which data segment? • Constraints • Data availability • Playback deadline • Heterogeneous partner bandwidth
Scheduling Algorithm • A variation of parallel machine scheduling • NP-hard • Heuristic • Messages exchanged • Window-based buffer map (BM): data availability • Segment requests (piggybacked on BMs) • Fewest suppliers first • Multiple suppliers: highest bandwidth within the deadline first • Simpler algorithm in the current implementation • Network coding?
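The heuristic above can be sketched as: schedule the segments with the fewest potential suppliers first (they are hardest to obtain), and among a segment's suppliers pick the highest-bandwidth partner that can still deliver before the playback deadline, accounting for work already assigned to it. A rough sketch, with an illustrative serial-transfer time model (not the paper's exact formulation):

```python
def schedule(segments, partners):
    """
    segments: dict seg_id -> (deadline_s, supplier_ids)
    partners: dict partner_id -> bandwidth in segments/second
    Returns seg_id -> chosen partner, or None if nobody can meet
    the deadline. Fewest-suppliers-first, then fastest feasible.
    """
    assignment = {}
    load = {p: 0 for p in partners}    # segments already assigned
    # rarest segments (fewest suppliers) are scheduled first
    for seg, (deadline, suppliers) in sorted(
            segments.items(), key=lambda kv: len(kv[1][1])):
        best = None
        for p in sorted(suppliers, key=lambda q: -partners[q]):
            finish = (load[p] + 1) / partners[p]   # serial transfer time
            if finish <= deadline:
                best = p
                break
        if best is not None:
            load[best] += 1
        assignment[seg] = best
    return assignment
```

For example, a segment held only by one partner is claimed first; a later, widely available segment then falls back to a slower partner if the fast one is already too loaded to meet the deadline.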
Analysis of DONet • Coverage ratio at distance k • E.g., 95% of nodes are covered within 6 hops for M = 4 • Average distance O(log N) • DONet vs. tree-based overlay • Much lower outage probability
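The coverage claim (most nodes reached within a few hops for M partners, average distance O(log N)) can be checked on a random overlay; a rough sketch that treats each partnership as an undirected edge and measures BFS coverage from one source (parameters and names are illustrative):

```python
import random
from collections import deque

def coverage_within(n, m, max_hops, seed=0):
    """Build a random overlay where each node picks m partners,
    then BFS from node 0; return fraction reached per hop count."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in rng.sample([x for x in range(n) if x != u], m):
            adj[u].add(v)
            adj[v].add(u)          # partnerships are bidirectional
    dist = {0: 0}
    q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(d <= k for d in dist.values()) / n
            for k in range(max_hops + 1)]

print(coverage_within(2000, 4, 6))   # coverage fraction at hops 0..6
```

With M = 4 each node ends up with roughly 8 neighbors on average, so the reachable set grows geometrically with hop count, which is the intuition behind the O(log N) average distance.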
Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work
PlanetLab Experiments • Distributed experimental system • DONet module • Console and automation • Command dispatching and report collection • Caveats • Scalability • Reproducibility • Representativeness
Geographical Node Distribution May 24, 2004 # of active nodes: 200-300
PlanetLab Result • Data continuity, 200 nodes, 500 kbps streaming
Implementation: CoolStreaming • First release: May 30, 2004 • Source code: ~2,000 lines of Python • Programming time: • PlanetLab prototype: 2 weeks • Porting from the prototype: 2 weeks • Supported formats: • Real Video / Windows Media • Platform- and media-independent • Scale and capacity • Total downloads: • Peak time: 14,000 concurrent users • Streaming rate: 450-700 kbps
User Distribution (June 2004) • Heterogeneous network environment • LAN, DSL, CABLE...
Online Statistics (Jun 21, 2004) Average packet loss: around 1-5%
Observations • The current Internet has enough available bandwidth to support TV-quality streaming (>450 kbps) • Bottlenecks: server and end-to-end bandwidth • Larger data-driven overlay → better streaming quality • Capacity amplification
Outline • Motivation • Background and related work • Design of DONet/CoolStreaming • Implementation and empirical study • Future work
Future of DONet/CoolStreaming • Content • Solution: DONet/CoolStreaming as a capacity amplifier between content providers and clients • Virtually part of the network infrastructure • Enhancements • Scheduling algorithm • Simplified version • Network coding • Transport protocol • TCP (?)
Future of DONet/CoolStreaming • Enhancements (cont'd) • User interface • Combined with caching • Combined with CDNs • Provide worldwide, reliable media streaming service • On-demand streaming
Q & A Thanks