CS 414 – Multimedia Systems Design
Lecture 38 – P2P Streaming (Part 2)
Klara Nahrstedt
CS 414 - Spring 2014
Administrative • MP3 deadline Saturday, May 3, 5pm • If you have bonus days (max 2 days), you can deliver MP3 by May 5, 5pm • Demonstrations of MP3: May 5, 5-7pm • Top four groups will be decided Monday, May 5 at 7pm (via email, also posted on the newsgroup/class website) – these groups will compete in front of the judges on Tuesday, May 6
Administrative • Homework 2 is posted today • Deadline: Wednesday, May 7, 11:59pm • Peer Evaluations – due Friday, May 9, midnight • Peer Evaluation Form and Explanation – available on the class website • Submit your Peer Evaluation to klara@illinois.edu • Note: if you do not submit your peer evaluations, you get 0 for self-evaluation and 100% for your group mates • ¼-unit projects – due Friday, May 9, midnight (if you need more time, arrange a deadline with the instructor)
Outline • Summary of P2P File Sharing • P2P Streaming
Gnutella, searching for files • Flood the query (e.g., keywords "jazz") to neighbors • Ignore repeated messages • Answer if there is a local match • The query hit is sent back along the reverse path • The requester then establishes a direct connection and fetches the file • Query message: <id, QUERY, ttl, hops, payload length, min speed, keywords> • Query hit message: <id, QUERY HIT, ttl, hops, payload length, num hits, port, ip, speed, (file index, file name, file size), servent id>
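As a rough illustration (not from the lecture), here is a minimal Python sketch of this flooding behavior: decrement the TTL, suppress repeated message ids, and report a hit to the peer the query arrived from (standing in for reverse-path routing). All names here – Query, Servent, on_query – are hypothetical.

```python
# A minimal sketch of Gnutella-style query flooding; all names hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Query:
    id: str        # globally unique message id
    ttl: int       # remaining hops the query may travel
    hops: int      # hops traveled so far
    keywords: str  # e.g., "jazz"

class Servent:
    def __init__(self, name, files):
        self.name = name
        self.files = files          # filenames this peer shares
        self.neighbors = []         # neighbor list
        self.seen = set()           # message ids already processed

    def on_query(self, q, from_peer):
        if q.id in self.seen:       # ignore repeated messages
            return
        self.seen.add(q.id)
        hits = [f for f in self.files if q.keywords in f]
        if hits:                    # a real servent routes the hit back hop-by-hop
            print(f"{self.name} -> {from_peer}: QUERY HIT {hits}")
        if q.ttl > 1:               # keep flooding while TTL remains
            fwd = replace(q, ttl=q.ttl - 1, hops=q.hops + 1)
            for n in self.neighbors:
                n.on_query(fwd, from_peer=self.name)

# Usage: a <-> b <-> c; a floods a query for "jazz" via b
a, b, c = Servent("a", []), Servent("b", []), Servent("c", ["jazz.mp3"])
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
b.on_query(Query("q1", ttl=7, hops=0, keywords="jazz"), from_peer="a")
```

The periodic ping/pong traffic on the next slide follows the same flood-and-reverse-path pattern, only with empty payloads.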
Gnutella, maintaining the overlay (peer management) • Each peer keeps a neighbor list (e.g., "A", "V") • Periodically flood a ping • The pong is sent back along the reverse path • Update the neighbor list with received pongs (e.g., adding "X") • Why periodically? • Ping: <id, PING, ttl, hops, payload length (zero)> • Pong: <id, PONG, ttl, hops, payload length, port, ip, num. files, num. KBs>
Gnutella, maintaining the overlay (peer management) • The neighbor list now holds "A", "V", and "X" • Peers can leave or fail at any time – P2P systems can have a high churn rate!
Gnutella – Example of Unstructured P2P • Servents (peers) store: • Their own files • Peer pointers (peer management) • Distributed peer management (using ping/pong) • Distributed file management (each servent maintains its own files and searches for others' files via flooding)
Gnutella: some issues • Ping/pong constitutes 50% of the traffic • Flooding causes excessive traffic • Searches with the same keywords are repeated • Large number of freeloaders (70% of users in 2000)
DHTs (Distributed Hash Tables) – Example of Structured P2P • A hash table allows these operations on an object identified by a key: • Insert • Lookup • Delete • A Distributed Hash Table supports the same operations, but in a distributed setting (the objects could be files)
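As a toy illustration (not the lecture's), here is a consistent-hashing DHT sketch in Python offering exactly these three operations. All names (ToyDHT, _owner) are hypothetical; real DHTs such as Chord or Pastry add routing tables and replication on top of this idea.

```python
# A toy DHT using consistent hashing: each key is owned by the first node
# clockwise from the key's position on a hash ring. Names are hypothetical.
import hashlib
from bisect import bisect_right

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class ToyDHT:
    def __init__(self, node_names):
        self.ring = sorted((h(n), n) for n in node_names)  # nodes on the ring
        self.store = {n: {} for n in node_names}           # per-node key/value store

    def _owner(self, key):
        ids = [nid for nid, _ in self.ring]
        i = bisect_right(ids, h(key)) % len(self.ring)     # wrap around the ring
        return self.ring[i][1]

    def insert(self, key, value):
        self.store[self._owner(key)][key] = value

    def lookup(self, key):
        return self.store[self._owner(key)].get(key)

    def delete(self, key):
        self.store[self._owner(key)].pop(key, None)

# Usage
dht = ToyDHT(["n1", "n2", "n3"])
dht.insert("jazz.mp3", "held by peer 10.0.0.5")
print(dht.lookup("jazz.mp3"))
```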
DHT performance comparison [comparison table not reproduced]
Streaming from servers • Clients stream from a video service backed by multiple servers • Problem: the bandwidth at the video service and the number of servers have to grow with demand • Flash crowds have to be taken into account
P2P Streaming • P2P streaming is a response to alleviate the bandwidth demand on video servers • Issue: inbound and outbound bandwidth of peers (download/upload) • P2P streaming can distribute the bandwidth demand across peers • Issue: finding a peer that has enough outbound bandwidth • P2P streaming requires management • Peer management • Chunk management • P2P streaming requires streaming distribution protocols
Peer Management for P2P Streaming • One could use: • Centralized peer management • The live source keeps the peer list, i.e., each peer registers with the live source • A separate session server keeps the peer list, i.e., each peer registers with the session server • Distributed peer management • Peers advertise among each other, and each builds a peer list of its neighbors (Gnutella-like)
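A minimal sketch of the centralized variant, under assumed names (SessionServer, register, get_neighbors): peers register with a session server, which hands each joiner a few candidate neighbors.

```python
# Sketch of centralized peer management: the session server keeps the peer
# list and suggests neighbors to joiners. All names are hypothetical.
import random

class SessionServer:
    def __init__(self):
        self.peers = {}                      # peer_id -> (ip, port)

    def register(self, peer_id, addr):
        self.peers[peer_id] = addr

    def unregister(self, peer_id):
        self.peers.pop(peer_id, None)

    def get_neighbors(self, peer_id, k=3):
        # return up to k other peers the joiner can connect to
        others = [a for p, a in self.peers.items() if p != peer_id]
        return random.sample(others, min(k, len(others)))

# Usage
server = SessionServer()
server.register("p1", ("10.0.0.1", 5000))
server.register("p2", ("10.0.0.2", 5000))
server.register("p3", ("10.0.0.3", 5000))
print(server.get_neighbors("p1"))
```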
Chunk Management for P2P Streaming • In P2P streaming, the video is divided into chunks • Chunk size can be the size of a GOP (group of pictures) • Chunk size can be agnostic to video semantics (e.g., 4 KB, 8 KB, or 32 KB chunks) • Peers hold chunks (not whole files), so chunk management is needed • Centralized chunk management • A server (the live source or a session manager) keeps track of which peer has which chunks • Distributed chunk management • Each peer keeps its own chunk table, and peers query their neighbors for requested chunks
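A hedged sketch of the distributed variant, with hypothetical names (ChunkPeer, buffer_map): each peer keeps its own chunk table and answers neighbors' queries from it.

```python
# Sketch of distributed chunk management: each peer's chunk table is a set
# of chunk ids, advertised to neighbors on request. Names are hypothetical.
class ChunkPeer:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.chunks = set()                 # chunk ids this peer holds

    def buffer_map(self):
        return frozenset(self.chunks)       # advertised to neighbors

    def missing(self, neighbor_map):
        return neighbor_map - self.chunks   # chunks worth pulling

    def on_request(self, chunk_id):
        return chunk_id if chunk_id in self.chunks else None

# Usage: p2 asks p1 for a chunk it is missing
p1, p2 = ChunkPeer("p1"), ChunkPeer("p2")
p1.chunks = {1, 2, 3}
p2.chunks = {1}
want = p2.missing(p1.buffer_map())          # {2, 3}
print(p1.on_request(min(want)))             # pull chunk 2
```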
P2P Streaming • A live stream source (which could itself be a member of the P2P network) feeds the peers watching the stream • Use the participating nodes' bandwidth • More nodes watching the stream = more shared bandwidth • Application-level multicast • How?
P2P Streaming • Common arrangements to multicast the stream: • Single tree • Multiple trees • Mesh-based • All nodes are usually interested in the stream • They all have to deal with node dynamism (join/leave/fail/capacity changes)
Streaming in a single tree • Frames are coded into a stream of packets (1, 2, 3, ...) • The source pushes the packets down the tree (using RTP/UDP)
Single Tree • Peers interested in the stream organize themselves into a tree rooted at the source • Nodes send as many copies of a data packet as they have children
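A small sketch of this push-down-the-tree behavior (hypothetical TreeNode class; a real system would send the packets over RTP/UDP rather than call methods):

```python
# Sketch of single-tree push: each node forwards every packet to each of
# its children, i.e., one copy per child. Names are hypothetical.
class TreeNode:
    def __init__(self, name):
        self.name = name
        self.children = []

    def receive(self, packet):
        print(f"{self.name} got packet {packet}")
        self.forward(packet)

    def forward(self, packet):
        for child in self.children:         # one copy per child
            child.receive(packet)

# Usage: source -> a -> (b, c)
source, a, b, c = (TreeNode(n) for n in ("source", "a", "b", "c"))
source.children = [a]
a.children = [b, c]
source.forward(1)
```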
Joining the tree • Find a node with spare capacity, then make it the parent • If the contacted node lacks capacity ("Parent?" – "Try one of my children"), pick a child according to a policy: • A random child, or • Round robin, or • The child closest in the physical network to the joining node
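A sketch of the join walk under assumed names (Node, join, MAX_CHILDREN), using the random-child redirect policy; round robin or closest-child would slot into the same place.

```python
# Sketch of the tree-join procedure: walk down from the source until a node
# with spare out-degree is found, redirecting to a child when full.
import random

MAX_CHILDREN = 2

class Node:
    def __init__(self, name):
        self.name = name
        self.parent = None
        self.children = []

    def join(self, newcomer):
        if len(self.children) < MAX_CHILDREN:   # spare capacity: adopt
            self.children.append(newcomer)
            newcomer.parent = self
        else:                                   # full: "try one of my children"
            random.choice(self.children).join(newcomer)

# Usage: "c" is redirected to "a" or "b" because the source is full
source = Node("source")
for name in ("a", "b", "c"):
    source.join(Node(name))
```

An orphan created when its parent leaves (next slide) would simply rerun join at the source or the grandparent.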
Leaving the tree or failing • When a parent leaves or fails, its children become orphans • Policies for choosing a new parent: • The children pick the source, or • All subtree nodes pick the source, or • The children pick the grandparent, or • All subtree nodes pick the grandparent • ...then repeat the join procedure
Single tree issues • Leaves do not use their outgoing bandwidth • Packets are lost while recovering after a parent leaves/fails • Finding an unsaturated peer can take a while • Tree connections could be rearranged for better transfer
Multiple Trees • Are nodes 1, 2, 3 receiving the same data multiple times? • No – chunks are striped across the trees • Example: node 1 receives chunk 1, node 2 receives chunk 2, and node 3 receives chunk 3 from the source; each then distributes its chunks to the other nodes in its subtree • Approach: a peer must be an internal node in only one tree and a leaf in the rest
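The striping rule fits in a one-liner; the round-robin assignment chunk i → tree (i mod k) below is my assumption of the simplest such mapping.

```python
# Sketch of chunk striping across multiple trees: chunk i travels down
# tree (i mod k), so the trees carry disjoint chunk sequences.
NUM_TREES = 3

def tree_for_chunk(chunk_id: int) -> int:
    return chunk_id % NUM_TREES     # round-robin stripe assignment

# Usage: chunks 0..8 are spread evenly over trees 0, 1, 2
for chunk in range(9):
    print(f"chunk {chunk} -> tree {tree_for_chunk(chunk)}")
```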
Multiple Trees – Other Approach: Multiple Description Coding (MDC) • The coder turns the frames into n descriptions (packets for description 0, ..., packets for description n) • Each description can be independently decoded (only one is needed to reproduce the audio/video) • The more descriptions received, the higher the quality
Streaming in multiple trees using MDC • Each description is streamed down its own tree (using RTP/UDP) • Example: assume odd-bit/even-bit encoding – description 0 is derived from each frame's odd bits, description 1 from its even bits
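To make the odd-bit/even-bit example concrete, here is a toy Python sketch (my illustration, not the lecture's coder): each byte is split into two bit-masks, either of which alone yields a coarse version, while both together restore the original exactly.

```python
# Toy odd-bit/even-bit MDC: two descriptions per frame; each decodes alone
# at reduced fidelity, and merging both is lossless.
def split_descriptions(data: bytes):
    d0 = bytes(b & 0b10101010 for b in data)   # description 0: odd bit positions
    d1 = bytes(b & 0b01010101 for b in data)   # description 1: even bit positions
    return d0, d1

def merge(d0: bytes, d1: bytes) -> bytes:
    return bytes(a | b for a, b in zip(d0, d1))

# Usage
frame = b"\xd7\x42"
d0, d1 = split_descriptions(frame)
assert merge(d0, d1) == frame       # both descriptions: full quality
```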
Multiple-Tree Issues • Complex procedure to locate a potential parent peer with spare out-degree • Degraded quality until a parent is found in every tree • Static mapping in trees, instead of choosing parents based on their (and my) bandwidth • An internal node can be a bottleneck
Mesh-based streaming (the mesh uses MDC) • Nodes are randomly connected to their peers, instead of statically • Different descriptions (0, 1, 2) flow over different links of the mesh • Basic idea: • Report to peers the packets that you have • Ask peers for the packets that you are missing • Adjust connections depending on in/out bandwidth
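A minimal sketch of one report/request round, under assumed names (schedule_pulls and the report format are mine): neighbors advertise what they have, and the node asks some holder for each missing packet, rarest first.

```python
# Sketch of a mesh pull round: pick the rarest missing packets first and
# spread the requests over the neighbors that hold them.
from collections import Counter
import random

def schedule_pulls(have: set, reports: dict):
    """reports: neighbor -> set of packets the neighbor advertised."""
    counts = Counter(p for pkts in reports.values() for p in pkts)
    missing = sorted((p for p in counts if p not in have),
                     key=lambda p: counts[p])       # rarest first
    plan = {}
    for p in missing:
        holders = [n for n, pkts in reports.items() if p in pkts]
        plan[p] = random.choice(holders)            # spread load over holders
    return plan

# Usage
print(schedule_pulls({1}, {"n1": {1, 2}, "n2": {2, 3}, "n3": {3}}))
```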
Content delivery • Nodes are organized into levels determined by their hop count to the source • Each description (0, 1, 2) diffuses through its own part of the mesh • Delivery happens in two phases: (1) a diffusion phase and (2) a swarming phase
Diffusion Phase • A new segment (a set of packets) of length L becomes available at the source every L seconds • Level 1 nodes pull its data units from the source, then level 2 pulls from level 1, etc. • Recall that reporting and pulling are performed periodically • Example exchange: a peer reports "Have segment 0", and a neighbor answers "Send me segment 0" in the following period
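A toy run of this level-by-level schedule, with a hypothetical level assignment, just to show that the segment reaches level p after p reporting/pulling periods:

```python
# Sketch of the diffusion phase: the newest segment moves down one level
# per period, so level p holds it after p periods.
levels = {0: ["source"], 1: ["n1", "n2"], 2: ["n3", "n4", "n5"]}
have = set(levels[0])                    # the source has the new segment

for period in (1, 2):                    # one pull round per period
    for node in levels[period]:          # level p pulls from level p-1
        have.add(node)
    print(f"after period {period}: nodes with segment = {sorted(have)}")
```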
Swarming Phase • At the end of the diffusion phase, all nodes have at least one data unit of the segment • Nodes pull the missing data units from (swarm-parent) peers located at the same or a lower level • Can node 9 pull new data units from node 16? • Node 9 cannot pull the data in a single swarm interval
Conclusion • P2P streaming – an alternative to CDNs (Content Distribution Networks) • Examples of P2P streaming technology: • PPLive • Skype
Many more details in references and source code • M. Castro, P. Druschel, A.-M. Kermarrec, A. Nandi, A. Rowstron, and A. Singh, "SplitStream: High-Bandwidth Multicast in a Cooperative Environment," SOSP 2003. • H. Deshpande, M. Bawa, and H. Garcia-Molina, "Streaming Live Media over Peers," Technical Report, Stanford InfoLab, 2002. • N. Magharei and R. Rejaie, "PRIME: Peer-to-Peer Receiver-drIven MEsh-based Streaming," INFOCOM 2007. • N. Magharei, R. Rejaie, and Y. Guo, "Mesh or Multiple-Tree: A Comparative Study of Live P2P Streaming Approaches," INFOCOM 2007. • http://freepastry.org