P2PMoD: Peer-to-Peer Movie-on-Demand • GCH1 • Group members: Cheung Chui Ying, Lui Cheuk Pan, Wong Long Sing • Supervised by Professor Gary Chan
Presentation flow • Introduction • System Design • Results • Conclusion • Q&A and Demo
Technical Challenges • Asynchronous Play Times • Movie-on-demand is not TV broadcast: viewers start watching at different times • Peer Dynamics • Network topology may change over time • Viewers may join and leave • Interactivity • Support for pause and jump
Related Work • Traditional server-to-client • Server load grows linearly with the number of viewers: not scalable • Multicasting • Special network support needed • Interactivity is not supported • BitTorrent • Unpredictable download order: playback cannot start before the download finishes • Interactivity is not supported
What is P2PMoD? It is a peer-to-peer (P2P) based interactive movie streaming system that brings movies to your home • Scalable • Low server bandwidth requirement • Decentralized control • Support for user interactivity • Resilience to node/link failure • Short playback delay
Why is P2PMoD important? • Overcomes the limitations of the server-to-client movie streaming architecture • Shapes the future of the movie-watching experience • Commercial deployment: helps curb illegal movie downloading via BitTorrent
System Architecture: PRIME • [Architecture diagram] The Director sits between a GUI and an off-the-shelf media player: the player talks to the Director's RTSP server for control and receives movie data over RTP, while the Director's internal logic handles DHT communication, statistics, and buffering
Director: RTSP Server • Implements an RTSP server (RFC 2326), so any RTSP-compatible media player can be used • The off-the-shelf media player sends RTSP protocol commands to the Director's internal logic; movie data is delivered back to the player over RTP
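To make the control path concrete, here is a minimal sketch of the RTSP/1.0 request framing (per RFC 2326) that a player would send to the Director's RTSP server. The URL, port, and CSeq values are hypothetical; only the method names and header framing come from the standard.

```python
def rtsp_request(method, url, cseq, headers=None):
    """Build a minimal RTSP/1.0 request: request line, CSeq, extra
    headers, terminated by a blank line (RFC 2326 framing)."""
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

# A typical session a player drives against the Director (values illustrative):
setup = rtsp_request("SETUP", "rtsp://localhost:554/movie", 2,
                     {"Transport": "RTP/AVP;unicast;client_port=8000-8001"})
play = rtsp_request("PLAY", "rtsp://localhost:554/movie", 3,
                    {"Range": "npt=0-"})
```

After SETUP negotiates the RTP transport, PLAY starts delivery; the Director then pushes packetized movie data to the negotiated client port.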
Packetized Movie Stream • A movie in a compatible format is split into frames by the Movie Packetizer, then into RTP packets by the RTP Packetizer (RFC 2250) • Can play on any RTP-compatible media player • Abstraction: no change is needed in PRIME to support a different movie format • An index file maps playback time to byte offset (e.g. 0 ms → 0, 1000 ms → 164452, 2000 ms → 299501)
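The index file makes seeking cheap: given a target playback time, the Director can binary-search for the byte offset to resume from. A sketch of that lookup, using the sample entries from the slide as a hypothetical in-memory index:

```python
import bisect

# Hypothetical in-memory form of the index file: (time_ms, byte_offset)
# pairs, matching the sample entries on the slide.
index = [(0, 0), (1000, 164452), (2000, 299501)]

def seek_offset(index, time_ms):
    """Return the byte offset of the last index entry at or before
    time_ms, so playback can resume on a packet boundary."""
    times = [t for t, _ in index]
    i = bisect.bisect_right(times, time_ms) - 1
    return index[max(i, 0)][1]
```

A jump to 1.5 s, for example, resolves to the 1000 ms entry's offset rather than scanning the stream.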
Director Backend • Responsible for the actual movie data retrieval process • Provide a programming interface for stream management and interactivity control • Implementation goals • Scalable and fast collaboration between peers • Efficient: minimize control communication overhead
Director Backend Implementation • Use the concept of virtual time slot to find potential parents • Use a DHT to achieve decentralized control communication
Moving virtual time slot • The movie timeline (length 00:42:39) since publishing is divided into fixed slots: slot 1 at 00:00:00, slot 2 at 00:03:00, slot 3 at 00:06:00, and so on • The time boundary keeps advancing along with real time • Peers stay in the same slot once they start playing, unless the user seeks to another position • Peers in the same or an earlier virtual time slot can help us in streaming • How to identify these potential parents? The DHT comes into play
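A small sketch of the slot arithmetic implied above, assuming 3-minute slots anchored at publish time (the deck does not spell out the formula, so the exact mapping is an assumption): a peer's slot is determined by how far its playback position lags behind real time, which is why it stays in one slot unless the user seeks.

```python
SLOT_SECONDS = 180  # 3-minute slots, matching the 00:03:00 boundaries on the slide

def current_slot(play_position_s, seconds_since_publish):
    """Hypothetical slot mapping: the lag between real time and playback
    position is constant while a peer plays, so the slot index derived
    from it is stable. Lower slot number = joined earlier = further
    ahead in the movie."""
    lag = seconds_since_publish - play_position_s
    return int(lag // SLOT_SECONDS) + 1
```

Under this mapping, a peer that joined at publish time sits in slot 1 forever (until it seeks), while a peer that joined 6 minutes later sits in slot 3 and can stream from the slot-1 peer.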
DHT Key • We construct &lt;movie hash, virtual time slot, random number&gt; as the DHT key • Examples: &lt;titanic, 1, 91&gt;, &lt;titanic, 2, 34&gt;, &lt;titanic, 2, 72&gt;, &lt;titanic, 3, 23&gt;, &lt;mi3, 1, 2&gt;, &lt;mi3, 5, 65&gt;, &lt;mi3, 6, 99&gt;, &lt;matrix, 2, 2&gt;, &lt;matrix, 3, 82&gt;, &lt;matrix, 3, 12&gt;, &lt;matrix, 4, 71&gt;
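One plausible way to turn the three-field key into a DHT-ring identifier (the deck only names the fields; the SHA-1 encoding and nonce range below are assumptions): the random component spreads registrations for the same movie and slot across the ring, so no single node owns them all.

```python
import hashlib
import random

def dht_key(movie_hash, slot):
    """Sketch of the <movie hash, virtual time slot, random number> key.
    Hashing the concatenated fields yields a uniformly distributed
    ring position, as FreePastry expects for node/object IDs."""
    nonce = random.randrange(100)  # illustrative range, matching the 2-digit examples
    material = f"{movie_hash}:{slot}:{nonce}".encode()
    return hashlib.sha1(material).hexdigest()
```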
How is the data retrieved? • Implemented 2 versions of the Director • Both use FreePastry as the DHT • Initial version • Movie data is carried over Scribe • Scribe: an application-level multicast infrastructure built on top of FreePastry • Revised version • Out-of-band transfer • Employs a multiple-parents scheme to transfer movie data
Director: Initial Version • Clients subscribe to the slots they are interested in, i.e. the slots covered by the pre-buffer range • For each slot topic (e.g. Slot 6 at 00:15:00, Slot 7 at 00:18:00), one node becomes the topic root, determined by its ID • By the nature of the DHT, slot root nodes are uniformly distributed around the ring • Also by the nature of the DHT, it usually takes several hops for node A to contact node B, so messages sometimes have to pass through off-topic nodes
Director: Revised Version • The DHT returns the IPs of potential parents; movie data then flows directly from multiple parents to the child • Direct data connections, in contrast to the multi-hop transfer overlay in Scribe • Less likely to suffer problems induced by link failure • Faster, due to reduced forwarding and processing overhead • If one parent jumps to another position, the child can still stream smoothly from the other parents, unaffected • A peer can schedule frame requests intelligently to achieve load balancing
Finding Parents • Recall that each node carries an IP list of its N immediate neighbors • By searching/routing a message to the &lt;movie, slot&gt; key, the responsible node can return a list of potential parents
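A toy in-memory stand-in for this lookup (the real system routes through FreePastry; the dictionary registry and the two-slot search window here are illustrative assumptions): peers register under their movie and slot, and a child collects parents from its own slot and earlier ones, since earlier-slot peers are further ahead in the movie.

```python
# Maps (movie, slot) -> list of registered peer addresses.
registry = {}

def register(movie, slot, addr):
    """A peer announces itself under its current <movie, slot> key."""
    registry.setdefault((movie, slot), []).append(addr)

def find_parents(movie, slot, max_slots_back=2):
    """Collect potential parents from the same slot and up to
    max_slots_back earlier slots (earlier slot = further ahead)."""
    parents = []
    for s in range(max(slot - max_slots_back, 1), slot + 1):
        parents.extend(registry.get((movie, s), []))
    return parents
```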
Director: Scheduling • Uses a buffer map that shows the frame availability at each node • Continuity: fetch the frames with the closest playback deadline first, so the stream stays smooth • Load sharing: fetch the frames possessed by the fewest nodes first, to obtain rare pieces for redistribution and to share the load of the peers holding them • Efficiency: stream from multiple parents at the same time
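A sketch of how the two scheduling rules could combine (the split into an "urgent window" and a rarest-first tail is an assumption; the deck names the rules but not how they are balanced): frames near the playback deadline go first in deadline order, and everything else goes rarest-first.

```python
def schedule_requests(missing_frames, availability, window=10):
    """Order frame requests: the first `window` frames by deadline
    (continuity), the rest by rarity (load sharing).
    `availability` maps frame -> number of parents holding it, a
    hypothetical summary derived from the parents' buffer maps."""
    by_deadline = sorted(missing_frames)
    urgent, rest = by_deadline[:window], by_deadline[window:]
    rest.sort(key=lambda f: availability.get(f, 0))  # rarest first
    return urgent + rest
```

Spreading the resulting request list across the multiple parents then gives the "efficiency" property from the slide.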
Results • Deployment of P2PMoD on 71 nodes in PlanetLab • Configuration: 1 server and 70 peers • 40KBps stream for 10 minutes • Measurement Metrics: • User Experience • Efficiency
Results – User Experience • Continuity measures: • Playback delay: time required to start the stream • Stall occurrences: number of times the stream pauses to buffer more data • Stall ratio: ratio of paused time to streaming time
Results – User Experience • Playback delay: over 90% of peers have &lt; 6 seconds delay • Stall occurrences: over 90% have &lt; 2 occurrences • Stall ratio: over 90% have &lt; 3% of total time
Results – Efficiency • Peer • Overhead caused by control messages • Server • Bandwidth required
Results – Efficiency • Peer: ratio of stream data to all data input is 90% • Server: data output rate of 275KBps • Output bandwidth equivalent to 7 streams (275KBps ÷ 40KBps per stream ≈ 7) • Uses 10% of the bandwidth of the traditional server-to-client model, which would have to serve all 70 peers directly
Practical Issues • Network traversal • Routers and NAT are common • Until IPv6 lands… • Workarounds: Universal Plug and Play (UPnP), hole punching • RTSP and RTP compatibility • Glitches are common and expected
Network Positioning • GNP and Vivaldi could potentially be used • They map network latency to coordinates in Rn • Even with n → ∞, never perfect, due to triangle inequality violations • GNP: requires landmark selection and reselection • Vivaldi: no fixed reference; coordinates are updated continuously (and may spin) • Ping time does not reflect transfer rate
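For flavor, here is one simplified Vivaldi update step (the real algorithm adds adaptive timesteps and per-node error weights, which are omitted here): each measured RTT nudges our coordinate so that distance in the space better predicts latency.

```python
import math

def vivaldi_step(xi, xj, rtt, delta=0.25):
    """One simplified Vivaldi update: move our coordinate xi along the
    line through xj so that predicted distance approaches the measured
    RTT. delta controls the step size."""
    dist = math.dist(xi, xj)
    if dist == 0:
        return list(xi)  # coincident; a real impl picks a random direction
    err = rtt - dist  # positive: we are too close in the coordinate space
    unit = [(a - b) / dist for a, b in zip(xi, xj)]
    return [a + delta * err * u for a, u in zip(xi, unit)]
```

Because every node keeps adjusting against every measurement, coordinates never fully settle, which is the "spinning" behavior the slide mentions.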
Future Work • Fixed data caches instead of moving slots • Parents' interactivity would not affect availability • Searching for / refreshing next-slot parents could be slow • Frame popularity • More movie formats and handheld devices to be supported • Error correction codes
Conclusion • Peer-to-peer is the way to go: it makes use of users' increasing bandwidth while reducing server resources • PRIME: a working P2P MoD implementation • Workload reduced by adopting open standards and using an off-the-shelf player
Thank You • Questions? • Demonstration
Pastry: Ring • [Ring diagram] Node IDs (0x0002, 0x22AF, 0x3529, 0x591A, 0x62C8, 0x7F52, 0x8392, 0x9A92, 0xA125, 0xCB95, 0xDF41) arranged on a circular ID space
Pastry: Routing Knowledge • Leaf set: the N immediate neighboring nodes on the ring • Routing table: entries organized by shared ID prefix, for reaching distant regions of the ID space • [Ring diagram with node IDs 0x0002 … 0xDF41]
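The routing table drives Pastry's prefix routing: at each hop, forward to a known node that shares a longer ID prefix with the key than the current node does. A sketch of that hop choice (leaf-set and numeric-closeness fallbacks are omitted; node IDs are treated as hex strings for simplicity):

```python
def shared_prefix(a, b):
    """Length of the common leading-digit prefix of two hex ID strings."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(key, node_id, known_nodes):
    """Pick the known node sharing the longest prefix with the key,
    provided it improves on our own match; None means we are the
    closest node this table knows about."""
    mine = shared_prefix(key, node_id)
    better = [n for n in known_nodes if shared_prefix(key, n) > mine]
    return max(better, key=lambda n: shared_prefix(key, n), default=None)
```

Each hop fixes at least one more digit of the key, which is why lookups take only a logarithmic number of hops around the ring.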
Pastry: Object Storage • An object (e.g. with key 0x3530) is stored at the node with the numerically closest ID (0x3529) and duplicated to the N immediate neighboring nodes • [Ring diagram]
PRIME? • PRIME stands for Peer-to-peer Interactive Media-on-demand