
Cost-Effective Video Streaming Techniques



  1. Cost-Effective Video Streaming Techniques Kien A. Hua School of EE & Computer Science University of Central Florida Orlando, FL 32816-2362 U.S.A

  2. Server Channels • Videos are delivered to clients as a continuous stream. • Server bandwidth determines the number of video streams that can be supported simultaneously. • Server bandwidth can be organized and managed as a collection of logical channels. • These channels can be scheduled to deliver various videos.

  3. Using Dedicated Channels [Figure: the video server sends a dedicated stream to each of several clients] Too expensive!

  4. Batching • FCFS (First Come, First Served) • MQL (Maximum Queue Length First) • MFQ (Maximum Factored Queue Length) Can multicast provide true VoD?
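As a sketch of how such a batching policy picks which queue to serve, the following assumes the MFQ score is the queue length divided by the square root of the video's access probability (a common statement of the "factored queue length"; the function name and inputs are illustrative, not from the slides):

```python
import math

def select_video_mfq(queues, access_prob):
    """MFQ: serve the video with the largest factored queue length,
    len(queue) / sqrt(access probability).  Weighting by 1/sqrt(p)
    favors colder videos, whose waiting clients would otherwise starve
    under a plain Maximum Queue Length policy."""
    best_video, best_score = None, float("-inf")
    for video, waiting in queues.items():
        if waiting:  # skip empty queues
            score = len(waiting) / math.sqrt(access_prob[video])
            if score > best_score:
                best_video, best_score = video, score
    return best_video
```

With a hot video (p = 0.81) holding three requests and a cold one (p = 0.04) holding two, MFQ serves the cold queue (score 10 vs. about 3.3), which MQL would not.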

  5. Challenges – conflicting goals • Low Latency: requests must be served immediately • Highly Efficient: each multicast must still be able to serve a large number of clients

  6. Some Solutions • Patching [Hua98] • Range Multicast [Hua02]

  7. Patching [Figure: client A receives the video through a regular multicast]

  8. Proposed Technique: Patching [Figure: client B, arriving t time units after the regular multicast started, receives the missing prefix through a patching stream while buffering the regular multicast in its video player buffer; the skew point marks the gap between the two streams]

  9. Proposed Technique: Patching [Figure: by time 2t the patching stream ends; the skew point is absorbed by the client buffer, and client B plays on from the regular multicast alone, just like client A]

  10. Client Design [Figure: the video server delivers a regular multicast and patching streams; each client (A, B, C) runs a data loader that merges the patching data (Lp) and regular data (Lr) in its buffer before passing them to the video player]

  11. Server Design • The server must decide when to schedule a regular stream (r) or a patching stream (p). [Figure: timeline of requests A–G served as r p p p r p p; each regular stream starts a new multicast group]

  12. Two Simple Approaches • If no regular stream for the same video exists, a new regular stream is scheduled • Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching

  13. Greedy Patching • A patching stream is always scheduled. [Figure: clients A–D each share data with the regular stream; the shared portion is limited by the buffer size over the video length]

  14. Grace Patching • If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled. [Figure: client B shares data with A's regular stream; client C, whose skew exceeds the buffer size, starts a new regular stream]
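The Grace Patching rule can be sketched as a small admission routine (times in minutes; the function name and signature are illustrative, not from the slides):

```python
def grace_patching(now, regular_start, buffer_len):
    """Grace Patching: schedule a patching stream only when the client
    buffer can absorb the skew from the latest regular stream for this
    video; otherwise start a new regular stream."""
    if regular_start is None:          # no regular stream for this video yet
        return "regular"
    skew = now - regular_start         # how far the client is behind
    return "patch" if skew <= buffer_len else "regular"
```

Greedy Patching would simply return "patch" whenever any regular stream exists, regardless of the buffer.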

  15. Performance Study • Compared with conventional batching • Maximum Factored Queue (MFQ) is used • Performance metric is average service latency

  16. Simulation Parameters
  Parameter                       Default    Range
  Number of videos                100        N/A
  Video length (minutes)          90         N/A
  Video access skew factor        0.7        N/A
  Server bandwidth (streams)      1,200      400-1,800
  Client buffer (min of data)     5          0-10
  Request rate (requests/min)     50         10-90
  Number of requests              200,000    N/A

  17. Effect of Server Bandwidth [Figure: average latency (seconds) vs. server communication bandwidth (400-1,800 streams) for Conventional Batching, Greedy Patching, and Grace Patching; no defection, request rate 50 arrivals/minute, client buffer 5 minutes]

  18. Effect of Client Buffer [Figure: average latency (seconds) vs. client buffer size (0-10 minutes of data) for Conventional Batching, Greedy Patching, and Grace Patching; no defection, request rate 50 arrivals/minute, server bandwidth 1,200 streams]

  19. Effect of Request Rate [Figure: average latency (seconds) vs. request rate (10-110 requests/minute) for Conventional Batching, Greedy Patching, and Grace Patching; no defection, client buffer 5 minutes, server bandwidth 1,200 streams]

  20. Optimal Patching [Figure: timeline of requests A–G served as r p p p r p p; each multicast group spans one patching window] What is the optimal patching window?

  21. Optimal Patching Window • D is the mean total amount of data transmitted by a multicast group • Minimize the server bandwidth requirement, D/W, under various values of W [Figure: shared data within a regular stream over the video length; W is the patching window, bounded by the buffer size]

  22. Optimal Patching Window • Compute D, the mean amount of data transmitted by each multicast group • Determine δ, the average time duration of a multicast group • The server bandwidth requirement is D/δ, which is a function of the patching period • Find the patching period that minimizes the bandwidth requirement
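The steps above can be worked through under a simple Poisson-arrival model. The formulas for D and the group duration below are modeling assumptions made for this sketch (the slide does not give them): arrivals within the window receive a patch equal to their skew, and a group lasts the window plus the mean wait for the next arrival.

```python
import math

def bandwidth_requirement(video_len, rate, w):
    """Mean server bandwidth D/delta for patching window w, assuming
    Poisson arrivals at `rate` requests/min and video length in minutes:
      D(w)     = video_len + rate * w**2 / 2   # regular stream + expected patch data
      delta(w) = w + 1/rate                    # mean lifetime of a multicast group
    (Both formulas are assumptions of this sketch.)"""
    return (video_len + rate * w * w / 2) / (w + 1.0 / rate)

def optimal_window(video_len, rate):
    """Closed-form minimizer of the requirement above: setting the
    derivative of D/delta to zero gives rate*w**2/2 + w - video_len = 0."""
    return (math.sqrt(1 + 2 * video_len * rate) - 1) / rate
```

For a 90-minute video and one arrival per minute, this model puts the optimal window near 12.5 minutes; patching less wastes group lifetime, patching more wastes patch bandwidth.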

  23. Candidates for Optimal Patching Window

  24. Piggybacking [Golubchik96] [Figure: stream playback rates adjusted by -5% / +5% so that new arrivals catch up with earlier streams A, B, C as others depart] • Slow down an earlier service and speed up the new one to merge them into one stream • Limited stream sharing due to long catch-up delay • Implementation is complicated

  25. Concluding Remarks • Unlike conventional multicast, requests can be served immediately under patching • Patching makes multicast more efficient by dynamically expanding the multicast tree • Patching streams usually deliver only the first few minutes of video data • Patching is very simple and requires no specialized hardware

  26. Patching on Internet • Problem: • Current Internet does not support multicast • A Solution: • Deploying an overlay of software routers on the Internet • Multicast is implemented on this overlay using only IP unicast

  27. Content Routing Each router forwards its Find messages to other routers in a round-robin manner.
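A minimal sketch of this round-robin forwarding (class and method names are hypothetical):

```python
class ContentRouter:
    """Overlay software router that forwards Find messages to its peer
    routers in round-robin order, spreading the search load evenly."""
    def __init__(self, peers):
        self.peers = list(peers)
        self._next = 0          # index of the peer to use next

    def forward_find(self, find_msg):
        peer = self.peers[self._next]
        self._next = (self._next + 1) % len(self.peers)
        return peer, find_msg
```

Successive Find messages from one router thus visit R1, R2, R3, R1, ... instead of hammering a single neighbor.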

  28. Removal of an Overlay Node • Inform the child nodes to reconnect to the grandparent

  29. Failure of Parent Node • Data stop coming from the parent • Reconnect to the server

  30. Slow Incoming Stream Reconnect upward to the grandparent

  31. Downward Reconnection • When reconnection reaches the server, future reconnection of this link goes downward. • Downward reconnection is done through a sibling node selected in a round-robin manner. • When downward reconnection reaches a leaf node, future reconnection of this link goes upward again.

  32. Limitation of Patching • The performance of Patching is limited by the server bandwidth. • Can we scale the application beyond the physical limitation of the server?

  33. Chaining [Hua97] • Using a hierarchy of multicasts • Clients multicast data to downstream clients • Demand on server bandwidth is substantially reduced

  34. Chaining [Figure: the video server streams to client A; each client caches data on disk and forwards the stream to the next client's screen and disk (A → B → C)] • Highly scalable and efficient • Implementation is complex

  35. Range Multicast [Hua02] • Deploying an overlay of software routers on the Internet • Video data are transmitted to clients through these software routers • Each router caches a prefix of the video streams passing through • This buffer may be used to provide the entire video content to subsequent clients arriving within a buffer-size period
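One way to model the router's cache is as a sliding window over the stream: a client arriving within a buffer-size period of the stream's start still finds the first unit in the cache, and can then follow the window for the rest of the video. The class below is an illustrative sketch under that assumption, not the actual router implementation:

```python
from collections import deque

class RouterCache:
    """One software router's cache for a single stream: holds the most
    recent `size` time units of video passing through."""
    def __init__(self, size):
        self.size = size
        self.units = deque()               # (unit_index, payload)

    def on_unit(self, index, payload):
        """Called once per time unit relayed downstream."""
        self.units.append((index, payload))
        if len(self.units) > self.size:
            self.units.popleft()           # evict the oldest unit

    def can_start_from_beginning(self):
        """True while unit 0 is still cached, i.e. a newly arriving
        client can be served the entire video from this router."""
        return bool(self.units) and self.units[0][0] == 0
```

With a 10-unit cache, a client arriving 8 units into the stream can still be served from unit 0; one arriving after 15 units cannot, since unit 0 has been evicted.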

  36. Range Multicast Group • Four clients join the same server stream at different times without delay • Each client sees the entire video • Buffer size: each router can cache 10 time units of video data • Assumption: no transmission delay

  37. Multicast Range • All members of a conventional multicast group share the same play point at all times • They must join at the multicast time • Members of a range multicast group can have a range of different play points • They can join at their own time [Example: the multicast range at time 11 is [0, 11]]

  38. Network Cache Management • Initially, a cache chunk is free. • When a free chunk is dispatched for a new stream, the chunk becomes busy. • A busy chunk becomes hot if its content matches a new service request.
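The free → busy → hot life cycle above can be sketched as a tiny state machine (names are illustrative):

```python
class CacheChunk:
    """Network cache chunk life cycle: free -> busy -> hot."""
    def __init__(self):
        self.state, self.video = "free", None   # initially, a chunk is free

    def dispatch(self, video):
        """A free chunk dispatched for a new stream becomes busy."""
        if self.state != "free":
            raise RuntimeError("only free chunks can be dispatched")
        self.state, self.video = "busy", video

    def on_request(self, video):
        """A busy chunk whose content matches a new service request
        becomes hot.  Returns True if the chunk is (now) hot."""
        if self.state == "busy" and self.video == video:
            self.state = "hot"
        return self.state == "hot"
```

A hot chunk is one whose cached content is known to be serving more than one client, so a replacement policy would evict free and busy chunks first.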

  39. RM vs. Proxy Servers

  40. 2-Phase Service Model (2PSM) [Hua99] Browsing Videos in a Low-Bandwidth Environment

  41. Search Model • Use similarity matching or keyword search to look for the candidate videos. • Preview some of the candidates to identify the desired video. • Apply VCR-style functions to search for the video segments.

  42. Conventional Approach 1. Download S0 2. Download S1 while playing S0 3. Download S2 while playing S1 . . . Advantage: Reduces wait time Disadvantage: Unsuitable for searching video

  43. Search Techniques • Use extra preview files to support the preview function • Requires more storage space • Downloading the preview file adds delay • Use separate fast-forward and fast-reverse files to provide the VCR-style operations • Requires more storage space • Server can become a bottleneck

  44. Challenges • How to download the preview frames for FREE? • No additional delay • No additional storage requirement • How to support VCR operations without VCR files? • No overhead for the server • No additional storage requirement

  45. 2PSM – Preview Phase

  46. 2PSM – Playback Phase

  47. Remarks 1. It requires no extra files to provide the preview feature. 2. Downloading the preview frames is free. 3. It requires no extra files to support the VCR functionality. 4. Each client manages its own VCR-style interaction; the server is not involved.

  48. 2PSM Video Browser
