Scalable On-demand Media Streaming with Packet Loss Recovery Anirban Mahanti Department of Computer Science University of Calgary Calgary, AB T2N 1N4 Canada
Objectives • Context: • Video-on-demand applications on the Internet, satellite & cable television networks • E.g., Online courses, movies, interactive TV • Goals: • Scalable and reliable streaming
Outline • Background • Reliable Periodic Broadcast (RPB) • SWORD Prototype • Summary
Video-on-Demand Distribution Model • A client can tune in to receive any ongoing media delivery using its Set Top Box • True broadcast: Satellite and cable TV networks • Multipoint delivery provided in the Internet by IP-Multicast or Application Level Multicast
Traffic Assumptions • 100s – 1000s requests for a media file per play duration • Skewed popularity of media files • 10% – 20% of the files account for 80% of the requests
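The skewed-popularity assumption is commonly modeled with a Zipf distribution. A minimal sketch, where the exponent `theta = 1.0` is an illustrative assumption rather than a value from the talk:

```python
# Sketch of skewed file popularity via a Zipf distribution.
# theta is an assumed illustrative exponent, not a value from the talk.

def zipf_shares(num_files, theta=1.0):
    """Return the request probability of each file, most popular first."""
    weights = [1.0 / (rank ** theta) for rank in range(1, num_files + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(100)
top_20_percent = sum(shares[:20])
print(f"Top 20% of files receive {top_20_percent:.0%} of requests")
```

For 100 files with a pure Zipf skew, the top 20% of files already draw well over half of all requests, which is the regime where multicast-based delivery pays off.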
Scalable Streaming Protocols: Overview • Bounded Delay Protocols • Batching, Periodic Broadcasts • Tradeoff: start-up delay vs. bandwidth • Immediate Service Protocols • Patching, Bandwidth Skimming • Tradeoff: request rate vs. bandwidth
Batching Example • Playback rate = 1 Mbps, duration = 90 minutes • Group requests in non-overlapping intervals of 30 minutes: • Max. start-up delay = 30 minutes • Bandwidth required = 3 channels = 3 Mbps • Bandwidth increases linearly with decrease in start-up delay • [Figure: three channels rebroadcast the file on a staggered schedule, 30 minutes apart, over a 0–240 minute timeline]
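The slide's arithmetic can be sketched directly: with non-overlapping batching intervals, the channel count is the play duration divided by the maximum start-up delay.

```python
import math

def batching_channels(duration_min, max_delay_min):
    """Non-overlapping batching: one channel per batch interval
    that fits within the play duration."""
    return math.ceil(duration_min / max_delay_min)

# 90-minute movie at 1 Mbps playback rate:
assert batching_channels(90, 30) == 3   # 3 channels = 3 Mbps
assert batching_channels(90, 15) == 6   # halving the delay doubles bandwidth
```

This makes the linear tradeoff explicit: each halving of the start-up delay doubles the required server bandwidth.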
Periodic Broadcast Example • Partition the media file into 2 segments with relative sizes {1, 2}. For a 90 min. movie: • Segment 1 = 30 minutes, Segment 2 = 60 minutes • Advantage: • Max. start-up delay = 30 minutes • Bandwidth required = 2 channels = 2 Mbps • Disadvantage: Requires increased client capabilities • [Figure: Channel 1 repeats segment 1 every 30 minutes; Channel 2 repeats segment 2 every 60 minutes, over a 0–180 minute timeline]
Skyscraper Broadcasts (SB) [Hua & Sheu 1997] • Divide the file into K segments of increasing size • Segment size progression: 1, 2, 2, 5, 5, 12, 12, 25, … • Multicast each segment on a separate channel at the playback rate • Aggregate rate to clients: 2 × playback rate
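The Skyscraper progression above can be generated by a simple recurrence; the sketch below reproduces the published series (the recurrence form is inferred from the series, not quoted from the talk):

```python
def skyscraper_sizes(k):
    """First k relative segment sizes of Skyscraper Broadcasts,
    following the published progression 1, 2, 2, 5, 5, 12, 12, 25, ..."""
    sizes = [1, 2, 2]
    for n in range(4, k + 1):
        if n % 4 == 0:
            sizes.append(2 * sizes[-1] + 1)   # e.g. 2 -> 5, 12 -> 25
        elif n % 4 == 2:
            sizes.append(2 * sizes[-1] + 2)   # e.g. 5 -> 12
        else:
            sizes.append(sizes[-1])           # sizes repeat in pairs
    return sizes[:k]

print(skyscraper_sizes(8))  # [1, 2, 2, 5, 5, 12, 12, 25]
```

Capping the growth this way (rather than doubling forever) bounds client buffer requirements, at the cost of more server bandwidth than an unrestricted geometric progression.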
Harmonic Broadcasts [Juhn & Tseng, 1997] • Divide the file into K equal-size segments • Multicast segment i continuously at rate (playback rate)/i • Aggregate server bandwidth grows with the harmonic series
Periodic Broadcast Protocols: Summary • The lower bound tells us that just-in-time delivery yields the least server bandwidth usage • Protocols such as Skyscraper broadcast the initial portions more often than the later portions • Harmonic Broadcasting delays delivery of later portions by using low-rate channels • Periodic broadcast: • very short latency to start playing the media • (nearly) minimum server bandwidth • Internet delivery? • How to provide packet loss recovery?
“Digital Fountain” Approach [Vicisano et al. 1998, Byers et al. 1998] • A single multicast stream of FEC-encoded data • Each client listens until P packets arrive • Client decodes after all packets arrive (long latency)
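The decode-from-any-P-packets property can be illustrated with a toy erasure code (a single XOR parity packet, not the Tornado/LT codes used by the actual Digital Fountain work): a client that receives any k of the k+1 transmitted packets can reconstruct the data.

```python
# Toy (k+1, k) erasure code: one XOR parity packet lets a client
# recover from the loss of any single data packet. Real digital
# fountains use far more powerful codes (Tornado/LT/Raptor).

from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):
    """Append one parity packet: the XOR of all data packets."""
    parity = reduce(xor_bytes, packets)
    return packets + [parity]

def recover(received):
    """Rebuild the single missing packet by XOR-ing the k received ones."""
    return reduce(xor_bytes, received)

data = [b"seg1", b"seg2", b"seg3"]
coded = encode(data)
received = coded[:1] + coded[2:]          # packet 1 was lost in transit
assert recover(received) == b"seg2"
```

The protocol-level point is the same as in the slide: reception order does not matter, only the count of packets received, which is what makes a continuously multicast encoded stream loss-tolerant.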
Periodic Broadcasts: Performance • Lower bound: required server bandwidth grows only logarithmically as the start-up delay is reduced
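The lower bound follows from a simple argument: position t of the file must be transmitted at least once every t + d time units (a client arriving now needs position t within time d + t), so the minimum server bandwidth in playback-rate units is the integral of 1/(t + d) from 0 to T, i.e. ln(T/d + 1). A sketch of that bound:

```python
import math

def pb_bandwidth_lower_bound(duration, startup_delay):
    """Minimum server bandwidth (in playback-rate units) for periodic
    broadcast with maximum start-up delay d: position t must be sent at
    least once per (t + d); integrating 1/(t + d) over the file gives
    ln(T/d + 1)."""
    return math.log(duration / startup_delay + 1)

# 90-minute file, 30-second start-up delay:
print(f"{pb_bandwidth_lower_bound(90 * 60, 30):.2f} channels")
```

The logarithm is the key scalability result: shrinking the start-up delay by an order of magnitude costs only a constant amount of extra server bandwidth, versus the linear cost of batching.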
Delivery Techniques: Summary • Unicast service: Unicast Streaming • Broadcast/Multicast service: • Immediate streaming: Patching, Bandwidth Skimming • Bounded-delay streaming: Batching, Periodic Broadcasts • Reliable delivery: Digital Fountain (bulk data only); reliable counterparts for scalable streaming are the open "?" entries this work addresses
Outline • Background • Reliable Periodic Broadcast (RPB) • SWORD Prototype • Rate Adaptation • Quality Adaptation • Summary
Packet Loss Recovery • Make each channel a Digital Fountain • Is Skyscraper amenable to the Digital Fountain approach? • No! • Some segments are played while being received • The reception schedule requires tuning in to channels at precise times • Other limitations of Skyscraper: • Ad hoc segment size progression • Does not work for low client data rates
Reliable Periodic Broadcasts (RPB) • Optimized PB protocols (no packet loss recovery) • client fully downloads each segment before playing • required server bandwidth near minimal • Segment size progression is not ad hoc • Works for client data rates < 2 x playback rate • extend for packet loss recovery • extend for “bursty” packet loss • extend for client heterogeneity
Optimized Periodic Broadcasts • r = segment streaming rate = 1 • s = maximum # streams client listens to concurrently = 2 • b = client data rate = s × r = 2 • length of first s segments: l_k = l_1 (1 + r)^(k-1), 1 ≤ k ≤ s • length of segment k > s: l_k = r (l_(k-s) + … + l_(k-1))
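The segment lengths can be computed from the recurrence l_k = l_1 (1 + r)^(k-1) for k ≤ s and l_k = r·(l_(k-s) + … + l_(k-1)) for k > s, which follows from requiring each segment download to finish exactly when its playback starts; a sketch under that reconstruction:

```python
def opb_segment_lengths(k_max, r=1.0, s=2):
    """Relative segment lengths for optimized periodic broadcast,
    first segment normalized to 1: a client listening to at most s
    concurrent streams, each at rate r (in playback-rate units), just
    finishes downloading each segment as its playback begins."""
    lengths = [1.0]
    for k in range(2, k_max + 1):
        if k <= s:
            # first s downloads all start at client arrival
            lengths.append((1 + r) ** (k - 1))
        else:
            # download of segment k starts when segment k-s finishes
            lengths.append(r * sum(lengths[k - 1 - s:]))
    return lengths

# With r = 1, s = 2 the progression is Fibonacci-like:
print(opb_segment_lengths(5))  # [1.0, 2.0, 3.0, 5.0, 8.0]
```

Because later segments grow geometrically, a fixed number of channels covers an exponentially long file, which is what keeps server bandwidth near the logarithmic lower bound.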
Optimized PB: Performance • r = segment transmission rate, s = max. # streams client listens to concurrently, b = client data rate = s × r
Basic Reliable Periodic Broadcasts • p = max. cumulative loss rate for uninterrupted playback • encode each segment, multicast an infinite stream of segment data • a = "stretch" applied to listening time on each stream: a = 1/(1 − p) • length of first s segments: l_k = l_1 (1 + r/a)^(k-1), 1 ≤ k ≤ s • length of segment k > s: l_k = (r/a)(l_(k-s) + … + l_(k-1))
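With the stretch factor a = 1/(1 − p), a client listens to each stream a times longer than the segment's data length, so its effective reception rate per stream drops from r to r/a. A sketch, assuming the loss-free recurrence with r replaced by r/a (this placement of the stretch is an inference, not quoted from the talk):

```python
def rpb_segment_lengths(k_max, r=1.0, s=2, p=0.1):
    """Segment lengths for basic RPB: tolerating cumulative loss rate p
    by listening a = 1/(1-p) times longer per stream, i.e. an effective
    per-stream reception rate of r/a."""
    a = 1.0 / (1.0 - p)
    r_eff = r / a
    lengths = [1.0]
    for k in range(2, k_max + 1):
        if k <= s:
            lengths.append((1 + r_eff) ** (k - 1))
        else:
            lengths.append(r_eff * sum(lengths[k - 1 - s:]))
    return lengths

# Loss protection shrinks the later segments, so covering the same file
# needs slightly more channels than the loss-free protocol.
print(rpb_segment_lengths(5, p=0.1))
```

Setting p = 0 recovers the optimized-PB progression exactly, which is a quick consistency check on the construction.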
Basic RPB Protocol: Performance • 10% max. cumulative packet loss (i.e., p = 0.1, a = 1/0.9) • b = client data rate = s × segment streaming rate (r)
RPB: Tolerating Bursty Loss • Use a larger stretch a for the initial segments at the cost of a smaller a for later segments • Cumulative loss protection for the whole object = 0.10, B = 10, s = 8, b = 2, and d = 0.0017 T
RPB: Client Heterogeneity • B = 10, b = 2, s = 8, p = 0.1
Outline • Background • Reliable Periodic Broadcast (RPB) • SWORD Prototype • Summary
SWORD Prototype • Server side: encoding data stream, multicast streaming, merge algorithm • Client side: decoding data before sending to player
For Details … • Anirban Mahanti, "Scalable Reliable On-Demand Media Streaming Protocols", Ph.D. Thesis, Dept. of Computer Science, Univ. of Saskatchewan, March 2004. • Anirban Mahanti, Derek L. Eager, Mary K. Vernon, David Sundaram-Stukel, "Scalable On-Demand Media Streaming with Packet Loss Recovery", IEEE/ACM Trans. on Networking, April 2003. Also in ACM SIGCOMM 2001. • Email: mahanti@cpsc.ucalgary.ca • http://pages.cpsc.ucalgary.ca/~mahanti
Patching [Carter & Long 1997, Hua et al. 1998] • Clients use a "patch" stream to catch up with an ongoing "root" stream • Server bandwidth scales as the square root of the client request rate
Bandwidth Skimming [Eager et al. 1999] • Allocate a multicast stream to each client; a client also listens to the closest earlier active stream, merging with it • Server bandwidth scales logarithmically with the client request rate
Bandwidth Skimming: Performance • Bandwidth Skimming outperforms Patching • Bandwidth Skimming policies allow stream merging even for client data rates b < 2
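The square-root versus logarithmic scaling can be compared numerically. The closed forms below are widely cited asymptotic approximations (patching ≈ √(2N); hierarchical merging ≈ η·ln(1 + N/η) with η ≈ 1.62) used here as illustrative assumptions, not results quoted from the talk:

```python
import math

ETA = 1.62  # merging constant for closest-target policies (assumed value)

def patching_bandwidth(n):
    """Approximate server bandwidth (playback-rate units) for optimal
    patching with n client arrivals per play duration."""
    return math.sqrt(2 * n)

def skimming_bandwidth(n):
    """Approximate server bandwidth for hierarchical stream merging."""
    return ETA * math.log(1 + n / ETA)

for n in (10, 100, 1000):
    print(n, round(patching_bandwidth(n), 1), round(skimming_bandwidth(n), 1))
```

Even at moderate request rates the gap is large, and it widens with load, which is why the merging-based policies dominate at high request rates.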
RBS (Reliable Bandwidth Skimming) Performance • 10% packet loss