
A Measurement Study of a Peer-to-Peer Video-on-Demand System


  1. A Measurement Study of a Peer-to-Peer Video-on-Demand System. Bin Cheng (1), Xuezheng Liu (2), Zheng Zhang (2) and Hai Jin (1). (1) Huazhong University of Science and Technology; (2) Microsoft Research Asia. IPTPS 2007, Feb. 28, 2007

  2. Motivation • VoD is every couch potato’s dream • Select anything, start at any time, jump to anywhere • Centralized VoD is costly • Servers, bandwidth, content • P2P VoD is attractive, but challenging: • Harder than streaming: no single stream; unpredictable, multiple “swarms” • Harder than file downloading: a globally optimal (e.g. “rarest-first”) policy is inapplicable • VoD is a superset of file downloading and streaming

  3. Main Contribution • Detailed measurement of a real, deployed P2P VoD system • What do we measure? • E.g. what does it mean for a system to deliver good UX? • How far off are we from an ideal system? • How do users behave? • Etc. • Problems spotted • There is a great tension between scalability and UX • Network heterogeneity is an issue • Is P2P VoD a luxury that poor peers cannot afford?

  4. Outline • Motivation • System background: GridCast • Measurement methodology • Evaluation • Overall performance • User behavior and user experience (UX) • Conclusions

  5. GridCast Overview • Tracker server • Indexes all joined peers • Source server • Stores a copy of every video file • Web portal • Provides the channel list • Peer • Feeds data to the player • Caches all fetched data of the current file • Exchanges data with other peers [Architecture figure: the web portal serves the channel list, the tracker serves the initial neighbor list, and the source server backs the peers]

  6. One Overlay per Channel • Finding the partners • Get the initial content-closer set from the tracker when joining • Periodically gossip with some near- and far-neighbors (every 30 s) • Look up new near-neighbors via the current neighbors when seeking • Refresh the tracker every 5 minutes (sketched below)
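
A minimal sketch of that maintenance loop. The 30 s gossip period and the 5-minute tracker refresh come from the slide; the tracker and peer interfaces (join/refresh, exchange_lists/lookup_near), the neighbor cap, and the keep-the-closest-playheads policy are all assumptions:

    import random
    import time

    GOSSIP_INTERVAL = 30      # gossip every 30 s (from the slide)
    TRACKER_REFRESH = 5 * 60  # refresh the tracker every 5 minutes (from the slide)
    MAX_NEIGHBORS = 20        # assumed cap, not given on the slide

    def closest_by_playhead(peers, playhead, k=MAX_NEIGHBORS):
        # Assumed policy: keep the k peers whose playheads are nearest ours.
        unique = {p.peer_id: p for p in peers}.values()
        return sorted(unique, key=lambda p: abs(p.playhead - playhead))[:k]

    class OverlayMaintenance:
        def __init__(self, tracker, channel, playhead):
            self.tracker, self.channel, self.playhead = tracker, channel, playhead
            # Initial content-closer set, obtained from the tracker at join time.
            self.neighbors = list(tracker.join(channel, playhead))

        def gossip_once(self):
            # Swap neighbor lists with a few near- and far-neighbors.
            for peer in random.sample(self.neighbors, min(3, len(self.neighbors))):
                learned = list(peer.exchange_lists(self.neighbors))
                self.neighbors = closest_by_playhead(self.neighbors + learned, self.playhead)

        def on_seek(self, position):
            # On a seek, look up peers near the new position via current neighbors.
            self.playhead = position
            found = [p for n in self.neighbors for p in n.lookup_near(position)]
            self.neighbors = closest_by_playhead(self.neighbors + found, position)

        def run(self):
            last_refresh = time.time()
            while True:
                self.gossip_once()
                if time.time() - last_refresh >= TRACKER_REFRESH:
                    self.tracker.refresh(self.channel, self.playhead)
                    last_refresh = time.time()
                time.sleep(GOSSIP_INTERVAL)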

  7. Scheduling (every 10 s) • Feed data at the current position to the player • Fetch the next 200 seconds from partners (if they have them) • Fetch the next 10 seconds from the source server if no partners have them • If the bandwidth budget allows, fetch the rarest anchor from the source server or partners [Timeline figure: current position, the next 10 seconds, the next 200 seconds]
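
One pass of this scheduler might look as follows. The 10 s / 200 s windows and the rarest-anchor rule come from the slide; missing_segments(), missing_anchors(), and the has()/fetch() interface on partners and the source are hypothetical names:

    import random

    URGENT = 10      # seconds: data the player needs almost immediately
    LOOKAHEAD = 200  # seconds: window served opportunistically by partners

    def schedule_once(position, partners, source, budget_left):
        # Runs every 10 s. missing_segments()/missing_anchors() are hypothetical
        # helpers that scan the peer's local cache for holes.

        # 1. Fetch the 200-second lookahead window from partners that hold it.
        for seg in missing_segments(position, position + LOOKAHEAD):
            holders = [p for p in partners if p.has(seg)]
            if holders:
                random.choice(holders).fetch(seg)

        # 2. The urgent 10-second window falls back to the source server when
        #    no partner holds it, so the player never starves.
        for seg in missing_segments(position, position + URGENT):
            if not any(p.has(seg) for p in partners):
                source.fetch(seg)

        # 3. With leftover bandwidth, prefetch the rarest missing anchor from
        #    whichever party holds it (source as the fallback).
        if budget_left > 0 and missing_anchors():
            rarest = min(missing_anchors(),
                         key=lambda a: sum(p.has(a) for p in partners))
            holders = [p for p in partners if p.has(rarest)]
            (random.choice(holders) if holders else source).fetch(rarest)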

  8. Anchor Prefetching • Anchors are used to improve seek latency • Each anchor is a segment of 10 seconds • Anchors are 5 minutes apart • The playhead is adjusted to the nearest anchor (if present) [Figure: 10 s anchors spaced 5 minutes apart]
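
Snapping a seek to the nearest anchor is simple arithmetic; a sketch, with the 10 s / 5-minute constants from the slide and an assumed cache-lookup callback:

    ANCHOR_SPACING = 5 * 60  # anchors every 5 minutes (300 s)
    ANCHOR_LENGTH = 10       # each anchor is a 10-second segment

    def snap_to_anchor(seek_pos, is_cached):
        # Round the seek target to the nearest anchor boundary and use it
        # only if that anchor was actually prefetched; is_cached() is an
        # assumed cache-lookup callback.
        nearest = round(seek_pos / ANCHOR_SPACING) * ANCHOR_SPACING
        return nearest if is_cached(nearest) else seek_pos

    # Example: a seek to 1040 s snaps to the 900 s anchor when that anchor
    # is cached, so playback resumes from local data instead of a remote fetch.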

  9. Dataset Summary

  10. System Setup • GridCast has been deployed since May 2006 • The tracker server and the Web server share one machine • One source server with a 100 Mbps link, 2 GB of memory and a 1 TB disk • Popularity keeps climbing; in Dec. 2006: • Users: 91K; sessions: 290K; total bytes from the server: 22 TB • Peer logs collected at the tracker (every 30 s) • Latency, jitter, buffer map and anchor usage • Sep-log and Oct-log were collected w/o and w/ anchor prefetching, respectively • Just a matter of switching the codepath as the peer joins • The source server keeps other statistics (e.g. total bytes served)

  11. Strong Diurnal Pattern • Hot time vs. cold time • Hot time (10:00~24:00) • Cold time (0:00~10:00) • Two peaks • After lunch and before midnight • Higher on weekends and holidays

  12. Scalability • Ideal model: only the lead peer fetches from the source server • cs model: all data comes from the source server • GridCast significantly decreases the source server load (against cs), especially in hot time, and follows the ideal curve quite closely • The # of active channels increases 3x from cold time to hot time: the long-tail effect!

  13. Why? Understanding the Ceiling • Utilization = data from peers / total fetched data • Calculated from the snapshots • For the ideal model, utilization = (n-1)/n, where n is the # of users in a session, i.e. the concurrency (derived below) • GridCast achieves the ideal when n is large
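
The ceiling follows directly from the ideal model. Assuming each of the n concurrent users in a session fetches roughly the same amount of data D, only the lead peer takes its copy from the source server, so

    \[
    \text{utilization}
      = \frac{\text{data from peers}}{\text{total fetched data}}
      = \frac{(n-1)\,D}{n\,D}
      = \frac{n-1}{n}.
    \]

For n = 2 the ceiling is already 50%, for n = 10 it is 90%, and it approaches 100% as concurrency grows, which is why GridCast can match the ideal only when n is large.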

  14. Why Do We Fall Short (when n is small)? • A peer cannot get the content if: • It is only available from the server (missing content), caused by random seeks • It exists only on disconnected peers, caused by NAT • Its partners do not have enough bandwidth → Missing content dominates for unpopular files

  15. UX: Latency • Startup latency (70% < 5 s, 90% < 10 s) • Seek latency (70% < 3.5 s, 90% < 8 s) • Seek latency is smaller because: • Startup pays a 2-second delay to create TCP connections with the initial partners • Short seeks hit cached data

  16. UX: Jitter • Of sessions lasting 5 minutes, 72.3% have no jitter at all • Of sessions lasting 40 minutes, 40.6% have no jitter • Avg. delayed data: 3~4%

  17. Reasons for Bad UX • Network capacity • CERNET to CERNET: >100 KB/s • Non-CERNET to non-CERNET: 20~50 KB/s • CERNET to non-CERNET: 4~5 KB/s • Bad UX in the non-CERNET region may have prevented swarms from forming

  18. Reasons for Bad UX (cont.) • Server stress and UX are inversely correlated • Hot time → lots of active channels → long tail → high server stress → bad UX • Most pronounced for movies at the tail (next slide)

  19. UX Correlation with Concurrency • Higher concurrency: • Reduces both startup and seek latencies • Reduces the amount of jitter • Hot-time UX gets close to that of cold time

  20. User Seek Behavior • Seek behavior (without anchors) • BACKWARD : FORWARD ≈ 3:7 • Short seeks dominate (80% within 500 seconds) [Figure: distribution of seek distances, BACKWARD vs. FORWARD]

  21. Seek Behavior vs. Popularity • Fewer seeks in more popular channels • More popular channels usually have longer sessions • So: stop making bad movies :-)

  22. Benefit of Anchor Prefetching • Significant reduction of seek latency • FORWARD seeks benefit more (seeks < 1 s jump from 33% to 63%) • “Next-anchor first” is statistically optimal from any one peer’s point of view • “Rarest-first” is globally optimal in reducing the load on the source server (but sees 30% of prefetched data go unused)
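
The two policies differ only in which missing anchor a peer prefetches next; a minimal sketch, treating anchors as integer offsets and assuming partner objects with a hypothetical has() method:

    def next_anchor_first(missing_anchors, playhead):
        # Locally optimal: the anchor just ahead of the playhead is the
        # likeliest seek target for this particular peer.
        ahead = [a for a in missing_anchors if a > playhead]
        return min(ahead) if ahead else None

    def rarest_first(missing_anchors, partners):
        # Globally optimal: replicating the rarest anchor does the most to
        # shift future anchor fetches off the source server.
        return min(missing_anchors, key=lambda a: sum(p.has(a) for p in partners))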

  23. Conclusions • A few things are not new: • The diurnal pattern; the looooooooong tail of content • A few things are new: • Seek behaviors (e.g. the 7:3 split of forward/backward seeks; 80% of seeks are short) • The correlation of UX with source-server stress and concurrency • A few things are good to know: • Even moderate concurrency improves system utilization and UX • Simple prefetching helps to improve seek performance • A few things remain problematic: • The looooooong tail • Network heterogeneity • A lot remains to be done (and is being done): • Multi-file caching and proactive replication

  24. http://grid.hust.edu.cn/gridcast • http://www.gridcast.cn • Thank you! Q&A
