
Cloud Streaming


Presentation Transcript


  1. Cloud Streaming Jingwen Wang

  2. Video content distribution • Nearly 90% of all the consumer IP traffic is expected to consist of video content distribution • Web video like YouTube, P2P video like BitTorrent • Content distribution requirements: • Scalable and secure media storage, processing and distribution • Anytime, anywhere, any device consumption • Low latency, global distribution

  3. Cloud Provides a Better way • Massive Scale • Rapid File Transfer • Low IT Costs • High Reliability • Accredited Security

  4. CloudStream • Motivation: • Current solution for delivering videos: progressive download via CDN • Non-adaptive codec • Video freezes • WANT: an SVC-based video proxy that delivers high-quality Internet streaming adapting to variable conditions • Video transcoding from original formats to SVC • Video streaming to different users under Internet dynamics

  5. CloudStream • Implementation on a single processor: • Video transcoding to SVC is highly complex and transcoding speed is relatively slow • A long duration before a user can access the transcoded video • Video freezes because of unavailability of transcoded video data • To enable real-time transcoding and allow scalable support for multiple concurrent videos: • Use the cloud: CloudStream • Partition a video into clips and map them to different compute nodes in order to achieve encoding parallelization

  6. Concerns • Encoding parallelization: • Multiple video clips can be mapped to compute nodes at different times • A first-task first-server scheme can introduce unbalanced computation load and transcoding jitter • The transcoding component should not speed up video encoding at the expense of degrading the encoded video quality • Streaming jitter: • Video clips arrive at the streaming component in batches • A demand surge of network resources leads to some data not arriving at the user by the expected arrival time

  7. Metrics affecting Streaming Quality • Streaming Quality: • Access time • Transcoding and streaming latencies • Video freezes • Transcoding and streaming jitters • Video Content: • The temporal motion metric TM • The spatial detail metric SD

  8. Encoding Parallelization • SVC coding structure: • A video is divided into non-overlapping, coding-independent GOPs • A picture consists of layers • A layer consists of coding-independent slices • A slice consists of macro-blocks • Parallelism • Across different compute nodes: inter-node parallelism • Shared-memory (shared address space) parallelism inside one compute node: intra-node parallelism
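
The hierarchy above can be pictured as nested containers; the sketch below is a minimal Python illustration of that structure (the class and field names are assumptions for illustration, not the JSVM data model):

```python
# Illustrative sketch of the SVC coding hierarchy: video -> GOPs -> pictures
# -> layers -> slices -> macro-blocks. Names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Slice:
    macro_blocks: List[bytes] = field(default_factory=list)  # coding units inside a slice

@dataclass
class Layer:
    slices: List[Slice] = field(default_factory=list)        # coding-independent slices

@dataclass
class Picture:
    layers: List[Layer] = field(default_factory=list)        # temporal/spatial/quality layers

@dataclass
class GOP:
    pictures: List[Picture] = field(default_factory=list)    # coding-independent group of pictures

@dataclass
class Video:
    gops: List[GOP] = field(default_factory=list)            # non-overlapping GOPs
```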

  9. Multi-level Parallelization Scheme • Multi-level encoding parallelization: • GOPs: have the largest work granularity • Inter-node parallelism! • Slices: independent, with a relatively large amount of work • Intra-node parallelism! • Each slice on a different CPU
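
A minimal sketch of the two-level mapping this slide describes: GOPs are distributed across compute nodes (inter-node), and the slices of a GOP are encoded on different CPU cores of one node (intra-node). The function names and the round-robin assignment policy are assumptions for illustration, not the CloudStream scheduler.

```python
from multiprocessing import Pool

def encode_slice(slice_data):
    # placeholder: the real per-slice SVC encoding work would go here
    return slice_data

def encode_gop_on_node(gop_slices, cores=4):
    # intra-node parallelism: each slice of the GOP on a different CPU core
    with Pool(processes=cores) as pool:
        return pool.map(encode_slice, gop_slices)

def assign_gops_to_nodes(gops, num_nodes):
    # inter-node parallelism: distribute GOPs across compute nodes (round robin here)
    return {n: gops[n::num_nodes] for n in range(num_nodes)}
```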

  10. Intra-node Parallelism • Intra-node parallelism • Limit the average computation time spent on a GOP to an upper bound Tth • Shortens the access time! • The minimum number of slices encoded in parallel: Mmin
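
One way to read the Mmin idea is sketched below. It assumes GOP encoding time shrinks roughly linearly with the number of parallel slices, which is a simplifying assumption and not the paper's exact formula:

```python
import math

def min_slices(serial_gop_time: float, t_th: float, max_cores: int) -> int:
    """Smallest number of parallel slices keeping average GOP time under T_th."""
    m_min = math.ceil(serial_gop_time / t_th)   # linear-speedup assumption
    if m_min > max_cores:
        raise ValueError("T_th not reachable with the available cores")
    return max(1, m_min)
```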

  11. Inter-node Parallelism • Inter-node parallelism • Achieve real-time transcoding • Transcoding jitters are introduced by variation in GOP encoding time • Goal: • Minimize transcoding jitters • Minimize the number of compute nodes

  12. Estimation of a GOP's Encoding Time • A multi-variable regression model • At a given encoding configuration • Train on videos with different video content characteristics TM and SD to build the regression model • 90% of the predicted values on the testing data fall within 10% error
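
A minimal sketch of such a multi-variable regression, estimating a GOP's encoding time from its temporal motion (TM) and spatial detail (SD) metrics at a fixed encoding configuration. A plain linear model fit by least squares is assumed here; the slides do not commit to this exact model form.

```python
import numpy as np

def fit_encoding_time_model(tm, sd, encoding_time):
    """tm, sd, encoding_time: 1-D NumPy arrays over the training GOPs."""
    # design matrix with an intercept: time ~ b0 + b1*TM + b2*SD
    X = np.column_stack([np.ones_like(tm), tm, sd])
    coeffs, *_ = np.linalg.lstsq(X, encoding_time, rcond=None)
    return coeffs

def predict_encoding_time(coeffs, tm, sd):
    return coeffs[0] + coeffs[1] * tm + coeffs[2] * sd
```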

  13. Problem Formulation • Problem Formulation • Based on the approximation of each GOP's encoding time • Given Q jobs • Each job i has a deadline di and a processing time pi • Multiple nodes run in parallel; each job is processed without preemption on its machine until completion • Lateness li can be computed as ci (actual completion time) – di • Upper bound of lateness: τ • WANT: bound the lateness of these jobs, find the minimal number of machines N, and minimize τ
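
The objects in this formulation can be written down directly; the field names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Job:
    p: float  # processing time (estimated GOP encoding time)
    d: float  # deadline (when the GOP must be ready for streaming)

def lateness_on_machine(jobs):
    """Jobs run back to back, without preemption, in the given order."""
    t, lateness = 0.0, []
    for job in jobs:
        t += job.p              # completion time c_i
        lateness.append(t - job.d)  # lateness l_i = c_i - d_i
    return lateness             # tau upper-bounds max(lateness) over all machines
```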

  14. Complexity: NP-hard • Solution: • HallSh-based Mapping • Lateness-first Mapping

  15. HallSh-based Mapping • HallSh-based Mapping (HM): • Set an upper bound of τ and find the minimal N that satisfies it • Use the HallSh machine scheduling algorithm as a black box

  16. minMS2approx algorithm • 1. Pick ε = mini{(di - pi)/τ} • 2. Run HallSh with an increasing number of machines until the maximum lateness among all jobs is below (1 + ε)·τ, and set the machine number at this point to be K • 3. HallSh returns the scheduling results of all jobs. For a job with lateness over the upper bound on a particular machine j, move it, along with all later jobs on that machine, to a new machine K + j. Then compute the new completion times for all jobs on this new machine • 4. N is the number of used machines
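
The steps above can be sketched as follows. HallSh is treated as a black box: the callable signature assumed here, (jobs, machines) -> {machine_id: ordered job list}, is an illustration rather than the Hall-Shmoys scheduler's actual API. Jobs are any objects with processing time .p and deadline .d (e.g. the Job dataclass above).

```python
def _lateness(jobs):
    t, out = 0.0, []
    for j in jobs:
        t += j.p                 # completion time c_i
        out.append(t - j.d)      # lateness l_i
    return out

def min_ms_2_approx(jobs, tau, hall_sh):
    eps = min((j.d - j.p) / tau for j in jobs)                    # step 1
    k = 1
    while True:                                                    # step 2
        schedule = hall_sh(jobs, machines=k)
        worst = max(max(_lateness(js), default=float("-inf"))
                    for js in schedule.values())
        if worst < (1 + eps) * tau:
            break
        k += 1
    # step 3: a job whose lateness still exceeds tau is moved, together with
    # all later jobs on its machine, onto a fresh machine
    new_id = k + 1
    for m in list(schedule):
        for idx, late in enumerate(_lateness(schedule[m])):
            if late > tau:
                schedule[new_id] = schedule[m][idx:]
                schedule[m] = schedule[m][:idx]
                new_id += 1
                break
    return len(schedule), schedule                                 # step 4: N
```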

  17. Lateness-first Mapping • Lateness-first Mapping (LFM): • Compute the minimal N based on the deadline of each job, then minimize τ for the given N • Deciding the minimum N: • Tpic(M)·R < SG·N • Minimizing τ given N: • For the i-th job in every N jobs, compute its adjusted processing time p'i = pi – (di – d1) • Sort the N jobs in decreasing order of p'i • Schedule the job with the largest p'i to the first available compute node, the second largest one to the second available node
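
A sketch of this scheme is below. The minimum N comes from the slide's inequality (Tpic(M) is the per-picture encoding time, R the frame rate, SG the GOP size); the batching loop is an illustrative reading of the steps, not the paper's pseudocode, and "first available node" is approximated by the node that frees up earliest.

```python
import math

def min_nodes(t_pic: float, rate: float, gop_size: int) -> int:
    # smallest integer N with T_pic(M) * R < S_G * N
    return math.floor(t_pic * rate / gop_size) + 1

def lfm_schedule(jobs, n_nodes):
    """jobs: objects with processing time .p and deadline .d (e.g. Job above)."""
    schedule = {m: [] for m in range(n_nodes)}
    for start in range(0, len(jobs), n_nodes):
        batch = jobs[start:start + n_nodes]
        d1 = batch[0].d
        # adjusted processing time p'_i = p_i - (d_i - d_1), largest first
        batch = sorted(batch, key=lambda j: j.p - (j.d - d1), reverse=True)
        for job in batch:
            # earliest-free node stands in for "first available compute node"
            m = min(schedule, key=lambda k: sum(j.p for j in schedule[k]))
            schedule[m].append(job)
    return schedule
```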

  18. Test • SVC: JSVM • Environment: • Input: 64 480p video GOPs • GOP: 8 pictures • Picture: 4 temporal layers, 2 spatial layers, 1 quality layer • Up to 4 cores on each compute node • Number of slices corresponds to the number of cores

  19. Performance • Average encoding time and speedup using up to 4 cores in intra-node parallelism

  20. LFM

  21. HM

  22. Comparing LFM & HM • HM can successfully decide the appropriate compute node number and limit the transcoding jitters • HM may require a larger N than LFM in order to achieve the same level of lateness constraint

  23. Cloud Download • Using cloud utilities to achieve high-quality content distribution for unpopular videos • Motivation: • Video content distribution dominates Internet traffic • High-quality video content distribution is of great significance: 1. high data health 2. high data transfer rate

  24. Motivation of Cloud Download • High data health • Data health: number of available full copies of the shared file in a BitTorrent swarm • Data health < 1.0 is unhealthy • Use data health to represent the data redundancy level of a video file • High data transfer rate • Enables online video streaming • Live & VoD
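
The data-health idea can be made concrete as below: given how many peers hold each piece of a file, health is the number of complete copies the swarm can assemble. The floor-plus-fraction convention follows common BitTorrent client practice and is assumed here for illustration.

```python
def data_health(piece_copies):
    """piece_copies[i] = number of peers (including seeds) holding piece i."""
    full = min(piece_copies)                                   # full copies of every piece
    extra = sum(1 for c in piece_copies if c > full) / len(piece_copies)
    return full + extra

# A swarm where one of four pieces is held by nobody is unhealthy:
# data_health([1, 1, 1, 0]) == 0.75 < 1.0
```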

  25. State-of-the-art Techniques: CDN • CDN (Content Distribution Network) • Strategically deploying edge servers • Cooperate to replicate or move data according to data popularity and server load • A user obtains a copy from a nearby edge server • CDN: limited storage and bandwidth • Not cost-effective for a CDN to replicate unpopular videos to the edge servers • A charged facility serving only the content providers who have paid

  26. State-of-the-art Techniques: P2P • P2P (Peer-to-Peer) • End users forming P2P data swarms • Data directly exchanged between peers • Real strength shows for popular file sharing • P2P: poor performance for unpopular videos • Too few peers • Low data health • Low data transfer rate

  27. Neither CDN nor P2P works well in distributing unpopular videos, due to low data health or low data transfer rate • Worldwide deployment of cloud utilities provides a novel perspective to solve the problem: • Cloud Download!

  28. Cloud Download (diagram: the cloud delivers videos to users at a high data rate)

  29. Cloud Download • First, a user sends a video request to the cloud • Subsequently, the cloud downloads the requested video from the file link and stores it in the cloud cache • The user retrieves the requested video from the cloud at a high data rate via intra-cloud data transfer acceleration
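
A minimal sketch of this request flow. The function names and the in-memory dictionary standing in for the cloud cache are assumptions for illustration, not the VideoCloud API.

```python
import urllib.request

cloud_cache = {}                                       # stands in for the cloud cache tier

def fetch_from_internet(file_link: str) -> bytes:
    with urllib.request.urlopen(file_link) as resp:    # cloud downloads from the file link
        return resp.read()

def handle_user_request(file_link: str) -> bytes:
    if file_link not in cloud_cache:                   # first request: cloud fetches and stores
        cloud_cache[file_link] = fetch_from_internet(file_link)
    return cloud_cache[file_link]                      # user retrieves at high intra-cloud rate
```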

  30. User-side Energy Efficiency • Common download of an unpopular video • A common user keeps his computer (& NIC) powered on for long hours • Much energy is wasted while waiting • Cloud download of an unpopular video • The user can simply stay “offline” • When the video is ready, quickly retrieve it in a short time • User-side energy efficient!

  31. Cloud Download: View Startup Delay • The only drawback of Cloud Download: • For some videos, the user must wait for the cloud to download it: • View startup delay • This drawback is effectively alleviated • By the implicit and secure data reuse among users • The cloud only downloads a video when it is requested for the first time: • Cloud cache! • Subsequent requests are directly satisfied • Secure because oblivious to users • Data reuse rate ≈ 87%

  32. System Architecture (diagram: video request → check cache → data download → data store/cache → data transfer to the user at a high data rate)

  33. Component Function • ISP Proxy: receive & restrict requests in each ISP • Task Manager: check cache • Task Dispatcher: load balance • Downloaders: download data • Cloud Cache: store and upload data

  34. Hardware Composition

  35. Cache Capacity Planning & Replacement Strategy • Handle 0.22M daily requests • Average video size: 379MB • Video cache duration: <7 days • Thus, C = 379MB × 0.22M × 7 ≈ 584TB • Cache replacement strategies • 17-day trace-driven simulations • FIFO vs. LRU vs. LFU • FIFO worst, LFU best!
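
A quick check of the capacity figure, assuming decimal units (1 TB = 10^6 MB):

```python
avg_video_mb   = 379        # average video size
daily_requests = 0.22e6     # 0.22M requests per day
cache_days     = 7          # videos are cached for up to 7 days

capacity_tb = avg_video_mb * daily_requests * cache_days / 1e6
print(round(capacity_tb))   # -> 584 (TB), matching C ≈ 584 TB
```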

  36. Performance Evaluation • Dataset • Complete running log of the VideoCloud system over 17 days: Jan. 1, 2011 – Jan. 17, 2011 • 3.87M video requests, around 1.0M unique videos • Metrics • Data transfer rate • View startup delay • Energy efficiency

  37. Data transfer rate & View startup delay

  38. Energy Efficiency • User-side energy efficiency • E1: users’ energy consumption using common download • Eu: users’ energy consumption using cloud download • User-side energy efficiency =(E1 - Eu)/E1 = 92% • Overall energy efficiency • Ec: the cloud’s energy consumption • E2: the total energy consumption of the cloud and users, so E2= Ec+Eu • Overall energy efficiency = (E1 – E2)/E1= 86%
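
The two definitions above in executable form; the example numbers at the end are made up solely to be consistent with the reported 92% and 86%, since the absolute energy values are not given in the slides.

```python
def user_side_efficiency(e1, eu):
    # E1: users' energy with common download; Eu: users' energy with cloud download
    return (e1 - eu) / e1

def overall_efficiency(e1, eu, ec):
    e2 = ec + eu            # E2: cloud's energy Ec plus users' energy Eu
    return (e1 - e2) / e1

# e.g. e1 = 100, eu = 8  -> user-side = 0.92;  with ec = 6 -> overall = 0.86
```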

  39. Cloud Download Application • Cloud transcoding for mobile users • http://xf.qq.com • A mobile user submits a video link and the transcoding parameters to the cloud • The cloud downloads the video from the Internet via cloud download • The cloud transcodes the downloaded video and transfers the transcoded video back to the user

  40. References • Huang et al., CloudStream: Delivering high-quality streaming videos through a cloud-based SVC proxy, INFOCOM 2011 • Huang et al., Cloud Download: Using cloud utilities to achieve high-quality content distribution for unpopular videos, ACM Multimedia 2011 • http://www.slideshare.net/AmazonWebServices/aws-for-media-content-in-the-cloud-miles-ward-amazon-web-services-and-bhavik-vyas-aspera • The QQCyclone platform: http://xf.qq.com

  41. Thank you !
