
Multicast Routing Protocols: Source-Based Trees and Group-Shared Trees

Learn about multicast routing protocols and how packets are routed in multicast networks using source-based trees and group-shared trees. Explore the different approaches and protocols used to create optimal multicast trees.


Presentation Transcript


  1. Chapter 11 Unicast Routing Protocols (RIP, OSPF, BGP) (How the routers’ tables are filled in)

  2. MULTICAST ROUTING Now that we understand what multicasting is, let’s understand how packets are routed in the multicast case. Some objectives of multicast routing (very complex): • Each Rx in the group must get only one copy of the packet • Rx’s not belonging to the group DO NOT get a copy of the packet • The packet cannot visit the same router more than once (no loops) • The route from the Tx to the various Rx’s must be optimal (shortest path)

  3. MULTICAST TREES • Recall that in the RIP/OSPF unicast world, we converted networks to graphs when finding the optimal routes • For graphs, nodes can have both successors and predecessors • In the multicast world, networks are converted to trees • A tree has a hierarchical structure • Each node on a tree has (1) a single parent and (2) zero to multiple offspring (children) • The root (source or Tx) of the tree is the initial node (has no parent) • A leaf (group member) of a tree has no children • Called a spanning tree provided all nodes are connected • NOTE: show students the difference between a graph and a tree

  4. Two types of trees are used for multicasting by protocols: • Source-Based Trees – a single tree is created for each source-to-group combination. For example, given M sources and N groups, there would be a maximum of M x N trees • Group-Shared Trees – each group has its own tree. Given N groups, there would be a maximum of N trees.

  5. Source-Based Tree • Given a source needs to send a packet to group-1, a certain tree is used • Given the same source needs to send a packet to group-5, a different tree is used • Challenges: (1) determining all source-to-group combinations, (2) each tree needs to be optimal Two approaches are used to create optimal source-based trees: • (1st) An extension of the unicast distance vector routing we covered with RIP – used by DVMRP (Distance Vector Multicast Routing Protocol) • (2nd) An extension of the unicast link state routing we covered with OSPF – used by MOSPF (Multicast Open Shortest Path First) • Another protocol, PIM-DM (Protocol Independent Multicast – Dense Mode), uses either RIP or OSPF depending on need

  6. Group-Shared Tree • Given a source (source x) needs to send a packet to group-1, a certain tree is used • Given a different source (source y) needs to send a packet to the same group, group-1, the same tree is used • If either source-x or source-y needs to send to a different group, the tree would change • Note: the tree changes when the group changes – the tree remains the same for a group regardless of whether the source changes – the group determines the tree Two approaches are used to create optimal group-shared trees: • (1st) Steiner Tree – the optimal tree has the minimum-cost routes, like Dijkstra’s algorithm, except that it is not rooted at any particular source (very complex and has to be re-run every time the topology changes) – not really used in the Internet • (2nd) Rendezvous-Point Tree – a tree is created for each group and a single router is selected as the core, rendezvous point, or root of the tree. The CBT (Core-Based Tree) protocol and PIM-SM (Protocol Independent Multicast – Sparse Mode) use the rendezvous-point tree approach.

  7. Multicast routing protocols

  8. DVMRP • Distance Vector Multicast Routing Protocol – similar to the distance vector routing protocol we covered for the unicast case – next-hop scenario. • For DVMRP, the optimal tree is not pre-defined – only the next hop How do we build a tree using the DVMRP approach? • Use a modified “flooding” approach • Recall what flooding is: a router sends a copy of a packet out of all of its interfaces – all interfaces except the interface the packet came in on • Flooding will cause looping problems (i.e., the same packet copy that left the router will re-visit the router) • The flooding is modified to stop the looping problem How is the flooding modified??????

  9. DVMRP - How is the flooding modified?????? • Instead of forwarding copies of the packet through all interfaces (except the receiving interface), ONLY FORWARD THE PACKET IF IT CAME IN ON THE SHORTEST PATH If it comes in on a non-shortest path – drop it • This approach of only forwarding the packet if it comes in on the shortest path is called Reverse Path Forwarding (RPF) – RPF prevents looping How does the router determine if the packet came in on the shortest path? • Recall that the unicast routing tables contain the next hop based on the shortest path – the table has destinations, interfaces and next hops en route to destinations • If the router uses the packet’s source address (instead of the destination address), the router can determine the NEXT HOP and desired INTERFACE to exit en route to the packet’s source address • Punch line: if the INTERFACE the packet arrived on is the same INTERFACE the packet would need to take on the shortest path back to the source address – then the PACKET ARRIVED USING THE SHORTEST PATH – make sense?

  10. DVMRP Continuing EXAMPLE A multicast router receives a packet with source address 195.34.23.7 and destination address 227.45.9.5 from interface 2. Should the router discard or forward the packet based on the following unicast table? SOLUTION: Interpreting the source address of 195.34.23.7 using the default mask, the router would reach network 195.34.0.0 via interface 3 (not interface 2). Recall that the packet came in on interface 2 – therefore, the router would DROP the packet (and not forward it). A sketch of this check follows below.
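
A minimal sketch of the RPF check described above. The unicast table from the slide is a figure that is not reproduced in this transcript, so the entries below are hypothetical stand-ins chosen to match the stated solution (network 195.34.0.0 reached via interface 3).

```python
# Hypothetical unicast routing table: destination network -> outgoing interface.
import ipaddress

UNICAST_TABLE = {
    ipaddress.ip_network("195.34.0.0/16"): 3,
    ipaddress.ip_network("130.56.0.0/16"): 1,
    ipaddress.ip_network("110.0.0.0/8"):   2,
}

def rpf_check(source_ip: str, arriving_interface: int) -> bool:
    """Return True if the packet arrived on the interface this router would
    itself use to reach the packet's *source* (i.e., it came in on the
    shortest path), False otherwise."""
    src = ipaddress.ip_address(source_ip)
    for network, interface in UNICAST_TABLE.items():
        if src in network:
            return interface == arriving_interface
    return False  # no route back to the source: drop

# The worked example: source 195.34.23.7 arriving on interface 2.
# The shortest path back to 195.34.0.0 leaves via interface 3, so the check
# fails and the router drops the packet.
print(rpf_check("195.34.23.7", 2))   # False -> drop
print(rpf_check("195.34.23.7", 3))   # True  -> forward copies out the other interfaces
```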

  11. DVMRP Continuing • What RPF guarantees is: each network will receive a copy of the multicast packet WITHOUT the loop problem • What RPF doesn’t guarantee is: each network will receive ONLY ONE COPY • With the Reverse Path Forwarding approach, networks could receive multiple copies (see example below) • To fix this problem, a policy called Reverse Path Broadcasting (RPB) can be implemented.

  12. Reverse Path Broadcasting (RPB) • To eliminate networks (nodes) receiving more than one copy, ONLY THE PARENT HAS THE RIGHT TO FORWARD (this is the RPB policy) • Recall: with a tree, each node has only ONE PARENT • Therefore, if the parent is the only node that can forward, no node should receive duplicates • Policy: the router sends the packet only out of those interfaces for which it is the designated parent. • See the example • The next question: “How is the parent determined?” • The router with the shortest path to the source is designated as the PARENT (a small sketch follows below) • Recall: because routers share info with their neighbors, they can easily determine which router has the shortest path to the source
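
A hedged sketch of the designated-parent choice for one network. The rule from the slide is "shortest path back to the source wins"; the lowest-router-ID tie-break used below is an assumption for illustration, not something stated on the slide.

```python
def designated_parent(candidates):
    """candidates: list of (router_id, distance_to_source) for the routers
    attached to the same network. Returns the router allowed to forward.
    Tie-break on router_id is an illustrative assumption."""
    return min(candidates, key=lambda c: (c[1], c[0]))[0]

# Routers R1, R2, R3 all sit on the same network; R2 is closest to the source,
# so only R2 (the designated parent) forwards multicast packets onto it.
print(designated_parent([("R1", 4), ("R2", 2), ("R3", 3)]))  # R2
```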

  13. RPB creates a shortest path broadcast tree (not a multicast tree) from the source to each destination. It guarantees that each destination receives one and only one copy of the packet.

  14. Based on pruning Using the IGMP (Internet Group Management Protocol), each PARENT ROUTER holds membership information and knows which groups it is not responsible for. The PARENT ROUTER sends a “prune message” to its upstream router letting the upstream router know NOT to send any packets belonging to certain no-interest groups through that corresponding interface. That upstream router will do the same to its upstream router. This creates a “pruning” effect in that only the packets belonging to groups with interested members are forwarded through a particular interface Based on grafting Suppose a “leaf” router (a router with NO children) had previously sent a prune message and suddenly realizes it is NOW INTERESTED in receiving the multicast packets. The leaf router will issue a grafting message upstream and, as a result, multicasting will resume Reverse Path Multicasting (RPM) • Recall that RPB broadcasts a packet rather than multicasting it • How is multicasting achieved? (1) the first packet is broadcast no matter what, (2) the remaining packets are multicast based on pruning and grafting (a sketch of the prune/graft bookkeeping follows below) • Another name for pruning and grafting is Reverse Path Multicasting (RPM)
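
A minimal sketch of the prune/graft bookkeeping a router might keep per downstream interface. The class and method names are illustrative, not from any particular DVMRP implementation.

```python
class MulticastInterfaceState:
    def __init__(self):
        self.pruned_groups = set()   # groups this downstream interface opted out of

    def on_prune(self, group):
        self.pruned_groups.add(group)       # stop forwarding this group downstream

    def on_graft(self, group):
        self.pruned_groups.discard(group)   # downstream is interested again: resume

    def should_forward(self, group):
        return group not in self.pruned_groups

iface = MulticastInterfaceState()
iface.on_prune("227.45.9.5")
print(iface.should_forward("227.45.9.5"))  # False: pruned
iface.on_graft("227.45.9.5")
print(iface.should_forward("227.45.9.5"))  # True: grafted back, multicasting resumes
```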

  15. RPM adds pruning and grafting to RPB to create a multicast shortest path tree that supports dynamic membership changes.

  16. MOSPF • MOSPF stands for Multicast Open Shortest Path First • Extension of the OSPF protocol • Instead of the tree being generated gradually, it’s generated all at once – by using the link state database (recall) • With the link state database, the router can see the entire topology • Each router can then use Dijkstra’s algorithm and obtain a least-cost tree rooted at any node • For multicast routing, we need a tree for each source/group pair • For the source/group trees, only the hosts with the particular group address are included • We do the previous by associating the unicast address with the group address – with this approach, we do the calculation the same way using the unicast address; however, the associated group address dictates whether the host is added to the tree or not • MOSPF is a data-driven protocol – the first time an MOSPF router sees a datagram with a given source and group address, the router runs Dijkstra’s algorithm (see the sketch below)
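
A hedged sketch of the MOSPF idea: run Dijkstra’s algorithm from the source over the link-state graph, then keep only the branches that lead to routers with members of the group. The graph, costs and membership set below are made-up illustrations.

```python
import heapq

def dijkstra_parents(graph, source):
    """graph: {node: {neighbor: cost}}. Returns {node: parent} on least-cost paths."""
    dist, parent, heap = {source: 0}, {source: None}, [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost in graph[u].items():
            if d + cost < dist.get(v, float("inf")):
                dist[v], parent[v] = d + cost, u
                heapq.heappush(heap, (d + cost, v))
    return parent

def prune_to_group(parent, members):
    """Keep only the edges on paths from the source to routers with group members."""
    keep = set()
    for m in members:
        while m is not None and m not in keep:
            keep.add(m)
            m = parent.get(m)
    return {node: parent[node] for node in keep if parent.get(node) is not None}

graph = {"S": {"A": 1, "B": 4}, "A": {"S": 1, "B": 1, "C": 3},
         "B": {"S": 4, "A": 1, "D": 2}, "C": {"A": 3}, "D": {"B": 2}}
members = {"C", "D"}                      # routers with hosts in the group
print(prune_to_group(dijkstra_parents(graph, "S"), members))
# e.g. {'C': 'A', 'A': 'S', 'B': 'A', 'D': 'B'} -- the pruned source/group tree
```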

  17. Core-Based Tree (CBT) Protocol • CBT is a group-shared protocol • Autonomous systems are divided into regions and a core router or rendezvous point is used for each region • In forming a tree: • 1st: the core router or rendezvous router is selected (very complex – we will not cover this process – it is not covered in your book either) • 2nd: all other routers are informed of the unicast address of the rendezvous router • 3rd: all routers wanting to join the group send a “join message” to the rendezvous router • 4th: the intermediate routers between the rendezvous router and the sending router record the address of the sender and the interface on which the message came into the router • 5th: after the rendezvous router has received all join messages, the tree is formed (a sketch of the join bookkeeping follows below)
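
A hedged sketch of the join process: a join message travels hop by hop toward the rendezvous (core) router, and each router along the way records the interface the join arrived on – that interface becomes a downstream branch of the shared tree. The topology and interface numbers are illustrative.

```python
class CbtRouter:
    def __init__(self, name, next_hop_to_rp):
        self.name = name
        self.next_hop_to_rp = next_hop_to_rp      # (neighbor, interface), or None at the RP
        self.downstream = {}                      # group -> set of interfaces toward members

    def receive_join(self, group, arriving_interface):
        # Remember which interface leads to the joining member(s): this becomes
        # a branch of the shared tree rooted at the rendezvous router.
        self.downstream.setdefault(group, set()).add(arriving_interface)
        return self.next_hop_to_rp                # where to forward the join next

# A leaf router R3 originates a join that travels R3 -> R2 -> R1 (the RP).
rp = CbtRouter("R1/RP", next_hop_to_rp=None)
r2 = CbtRouter("R2", next_hop_to_rp=("R1/RP", 1))

r2.receive_join("227.45.9.5", arriving_interface=2)   # join from R3 arrives at R2 on interface 2
rp.receive_join("227.45.9.5", arriving_interface=4)   # R2 forwards it; it reaches the RP on interface 4
print(r2.downstream, rp.downstream)
```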

  18. CBT - Sending a multicast packet Now that the tree is formed, how are multicast packets sent? Any particular source can send a multicast packet to the group by: • 1st: the source (inside or outside of the shared tree) sends the packet to the rendezvous router (via the rendezvous router’s unicast address) • 2nd: the rendezvous router then sends the packet to the group members

  19. DVMRP & MOSPF Versus CBT • For DVMRP and MOSPF, the tree is created from the root • For CBT, the tree is created starting from the leaves • For DVMRP, the tree is first made via broadcast and then pruned into a multicast tree • For CBT, initially there is no tree and then a tree is created gradually via grafting (i.e., announcing to the core that you want to be a part of the group)

  20. Protocol Independent Multicast – Dense Mode (PIM-DM) • PIM-DM is used in a dense multicast environment, such as a LAN environment. • PIM-DM is used when nearly every router is involved in multicasting – therefore broadcasting is justified • PIM-DM uses reverse path forwarding, pruning and grafting techniques for multicasting

  21. Protocol Independent Multicast – Sparse Mode (PIM-SM) • PIM-SM is used in a sparse multicast environment, such as a WAN environment. • PIM-SM is used when it is unlikely that every router is involved in multicasting – therefore broadcasting is NOT justified • PIM-SM operates more like CBT • PIM-SM allows the ability to switch between source-based tree and group-shared tree strategies

  22. Multicast Backbone (MBONE) • There are many more unicast-oriented routers in the Internet than multicast routers (i.e., routers able to multicast) • To create more links between multicast routers, the concept of “tunneling” is used • Tunneling – via unicast routers, multicast routers are logically connected – in essence we create a multicast backbone by logically linking the multicast routers

  23. MBONE – How are tunnels created? How to create a tunnel: • 1st: encapsulate the multicast packet inside a unicast packet (in the data field) • 2nd: the intermediate unicast routers route the packet to the next multicast router (a sketch follows below)
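
A conceptual sketch of the tunneling step: the whole multicast packet rides as the data of an ordinary unicast packet addressed to the multicast router at the far end of the tunnel. The dictionaries and field names are illustrative, not a real packet-crafting API.

```python
def encapsulate(multicast_packet: dict, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap a multicast packet inside a unicast packet for the tunnel."""
    return {
        "src": tunnel_src,             # this multicast router's unicast address
        "dst": tunnel_dst,             # the far-end multicast router's unicast address
        "payload": multicast_packet,   # the entire multicast packet rides as data
    }

def decapsulate(unicast_packet: dict) -> dict:
    """At the far end of the tunnel, recover the original multicast packet."""
    return unicast_packet["payload"]

mpkt = {"src": "195.34.23.7", "dst": "227.45.9.5", "payload": b"app data"}
tunneled = encapsulate(mpkt, "140.10.1.1", "160.20.2.2")
print(decapsulate(tunneled) == mpkt)   # True: the unicast routers in between only saw unicast
```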

  24. Transport Layer – Chapter 14 – UDP Transport Layer – Chapter 15 – TCP

  25. Question 1 to Students • What is the Transport Layer’s function/job/objective? • What are some of the tasks involved in implementing that “objective”?

  26. Question 2 to Students • In simple terms, prior to the lecture, explain the difference between UDP and TCP • What is the students’ understanding, if any, of the difference between UDP and TCP?

  27. Position of UDP in the TCP/IP protocol suite • Now we move to the Transport Layer: it consists of TCP and UDP • The Transport Layer protocols function between the network operations (hop-to-hop) and the application programs • Recall: transport protocols are implemented at the source Tx and destination Rx • Of the 2 transport protocols, UDP is the simpler and has less overhead • Let’s focus on UDP

  28. UDP versus IP • Transport protocols are responsible for creating process-to-process or end-to-end communication – UDP accomplishes this by using port numbers • Transport protocols are also responsible for flow and error control – UDP does not implement flow control; however, it does implement some minimal error control by dropping packets with errors • The only 2 capabilities UDP adds to IP are: (1) process-to-process communication vs. host-to-host communication – it sends/receives packets at the port level – and (2) some minimal error control • Recall: IP implements host-to-host communication

  29. Client-Server Paradigm • A very common example of process-to-process communication is the Client-Server paradigm. Recall our previous discussion • Today’s operating systems can support both multiple users and multiple programming environments • IP addresses identify the local and remote hosts • Port numbers identify the local and remote processes • UDP generates port number ids for client programs – called ephemeral port numbers • For the client to know exactly which server port to communicate with, universal port numbers are used for servers, called “well-known port numbers”

  30. IANA ranges • The IANA (Internet Assigned Numbers Authority) has divided the port numbers into 3 ranges (a small classifier follows below): • Well-known ports – ports ranging from 0 to 1023 – assigned and controlled by IANA • Registered ports – ports ranging from 1024 to 49151 – not assigned, but can be registered with IANA to prevent duplication • Dynamic ports – ports ranging from 49152 to 65535 – not assigned or registered – ephemeral ports
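
A small helper that reflects the three IANA ranges listed above.

```python
def port_range(port: int) -> str:
    if 0 <= port <= 1023:
        return "well-known"       # assigned and controlled by IANA
    if 1024 <= port <= 49151:
        return "registered"       # can be registered with IANA to prevent duplication
    if 49152 <= port <= 65535:
        return "dynamic"          # ephemeral ports chosen by clients
    raise ValueError("port numbers are 16-bit values (0-65535)")

print(port_range(53), port_range(8080), port_range(50000))
# well-known registered dynamic
```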

  31. Socket addresses • The combination of an IP address and a port number is called a socket address • There are a client socket address and a server socket address • The IP header contains the respective IP addresses • The UDP header contains the respective port numbers
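
In Python’s standard socket API, a socket address is literally the (IP address, port number) pair described above. This minimal sketch uses a loopback placeholder address; nothing needs to be listening for the datagram to be sent.

```python
import socket

server_socket_address = ("127.0.0.1", 9999)     # placeholder server IP + port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP
sock.bind(("0.0.0.0", 0))                        # port 0: let the OS pick an ephemeral port
client_socket_address = sock.getsockname()       # our own (IP, ephemeral port) pair
print(client_socket_address)
sock.sendto(b"query", server_socket_address)     # the datagram carries both port numbers
sock.close()
```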

  32. User datagram format • UDP packets are called user datagrams • A user datagram has a fixed-size header of 8 bytes • Source port number – the source is the device that’s sending at the time – it can be either the client or the server – the port number used by the sender • Destination port number – the port number used by the receiver (could be either the client or the server) • Length – defines the total length of the user datagram, header plus data • The length field in the user datagram is redundant – recall that the user datagram is encapsulated in an IP datagram. • If we subtract the IP datagram’s header length from its total length, the remainder would be the user datagram length. • Checksum – used to detect errors over the entire user datagram
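
A sketch of the 8-byte UDP header using Python’s struct module: four 16-bit fields (source port, destination port, length, checksum) in network byte order. The IP lengths at the end are illustrative numbers used only to show the redundancy mentioned above.

```python
import struct

def build_udp_header(src_port, dst_port, data, checksum=0):
    length = 8 + len(data)                 # header (8 bytes) + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(49700, 53, b"example query")
src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)          # 49700 53 21 0

# The length field is redundant: it also equals the IP datagram's total length
# minus the IP header length (illustrative values).
ip_total_length, ip_header_length = 41, 20
assert ip_total_length - ip_header_length == length
```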

  33. UDP OPERATION UDP uses concepts common to the Transport Layer: • Connectionless Service – each user datagram travels independently from source to destination – even if the datagrams are coming from the same source and going to the same destination • Flow and Error Control – there is no flow control and only minimal error control (the checksum) – the process using UDP should provide these control mechanisms • Encapsulation – a message is passed to UDP with 2 socket addresses and the length of the data. UDP then adds its header. Then UDP passes it to IP. IP then adds its header. Then it goes to the data link and physical layers. Decapsulation – the physical layer decodes the signal into bits and passes them to the data link layer, which checks for errors. IP then does its check and passes the datagram to UDP. UDP then uses the checksum to check the user datagram (a sketch of the checksum follows below)
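
A hedged sketch of how the UDP checksum is computed (the details are not spelled out on the slide): a 16-bit one’s-complement sum taken over a pseudo-header (source IP, destination IP, protocol 17, UDP length) plus the UDP header and data.

```python
import socket, struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                       # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:                   # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip, dst_ip, udp_header_and_data: bytes) -> int:
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 17, len(udp_header_and_data)))
    checksum = 0xFFFF & ~ones_complement_sum16(pseudo + udp_header_and_data)
    return checksum or 0xFFFF               # an all-zero result is transmitted as 0xFFFF

# UDP header (checksum field set to 0 while computing) followed by 5 data bytes.
segment = struct.pack("!HHHH", 49700, 53, 8 + 5, 0) + b"hello"
print(hex(udp_checksum("10.0.0.1", "10.0.0.2", segment)))
```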

  34. UDP OPERATION CONT… UDP uses concepts common to the Transport Layer: • Queuing (client) – the client process sends messages to the outgoing queue using the source port specified. • When a message arrives at the client, UDP checks to see whether an incoming queue has been created; if not, UDP drops the datagram • Queuing (server) – queues are associated with the well-known ports and remain open as long as the server is running

  35. UDP OPERATION CONT… UDP uses concepts common to the Transport Layer: • Multiplexing – on the sender side, several processes needing to use UDP are multiplexed and then sent to IP – UDP differentiates among the multiplexed datagrams via their assigned port numbers • Demultiplexing – on the receiving side, the datagrams are demultiplexed by destination port number (a sketch follows below)
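
A minimal sketch of demultiplexing: incoming user datagrams are steered to the queue of whichever local process owns the destination port, and dropped when no incoming queue exists (as described on the previous slide). The ports and datagram structure are illustrative.

```python
from queue import Queue

incoming_queues = {53: Queue(), 49700: Queue()}   # port -> that process's incoming queue

def demultiplex(datagram):
    dst_port = datagram["dst_port"]
    queue = incoming_queues.get(dst_port)
    if queue is None:
        return "dropped (no incoming queue for this port)"
    queue.put(datagram["data"])
    return f"delivered to the process on port {dst_port}"

print(demultiplex({"dst_port": 53, "data": b"query"}))
print(demultiplex({"dst_port": 6000, "data": b"stray"}))
```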

  36. Transmission Control Protocol (TCP)

  37. Position of TCP in the TCP/IP protocol suite • TCP is located in the Transport Layer • Recall: UDP only adds some minimal error checking and process-to-process communication to IP • TCP adds connection-oriented (what does this really mean?) and reliability features on top of IP

  38. Stream Delivery

  39. Transport Layer/TCP Responsibilities • Responsible for process-to-process communications (accomplished via the use of port numbers) • Responsible for flow control – uses a “sliding window” protocol to accomplish this • Responsible for error control – uses the acknowledgment packet, time-out and retransmission to accomplish this • Responsible for providing a connection mechanism • TCP Sender: (1) the application sends a stream to the transport layer, (2) the Tx makes a connection with the Rx, (3) breaks up the stream, (4) assigns overhead and numbers, (5) sends the segments one by one • TCP Receiver: (1) waits for all pieces to arrive, (2) error-checks each piece, (3) delivers error-free pieces to the receiving application as a stream, (4) after the entire stream has been delivered to the receiving application, terminates the connection

  40. Stream Delivery Service Cont… • Because the Tx and Rx could operate at different rates, TCP uses buffers at the Tx and Rx. • One example of a buffer: a circular array of 1-byte locations (typically in the hundreds or thousands) – buffer locations can be different sizes too • The Tx has 3 types of locations: (1) empty (white), (2) sent but not yet acknowledged (gray) and (3) needing to be sent (pink). • Gray locations are recycled after an acknowledgment is received (the reason for the circle) • The Rx has 2 types of locations: (1) empty (white) and (2) received bytes to be consumed (pink) • After the bytes are consumed, the locations are recycled (the reason for the circle) – a sketch of the sender buffer follows below
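
A hedged sketch of the sender-side buffer: bytes move from "to be sent" to "sent but not yet acknowledged", and acknowledged slots are recycled. Two deques stand in for the circular array; the class name and capacity are illustrative.

```python
from collections import deque

class SendBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.to_send = deque()        # pink: buffered, not yet transmitted
        self.unacked = deque()        # gray: transmitted, awaiting acknowledgment

    def write(self, data: bytes):
        free = self.capacity - len(self.to_send) - len(self.unacked)
        accepted = data[:free]        # the application has to wait to write the rest
        self.to_send.extend(accepted)
        return len(accepted)

    def transmit(self, n):
        for _ in range(min(n, len(self.to_send))):
            self.unacked.append(self.to_send.popleft())

    def acknowledge(self, n):
        for _ in range(min(n, len(self.unacked))):
            self.unacked.popleft()    # slot recycled: becomes white/empty again

buf = SendBuffer(capacity=10)
buf.write(b"abcdefgh"); buf.transmit(5); buf.acknowledge(3)
print(len(buf.unacked), len(buf.to_send))   # 2 sent-but-unacked, 3 still to send
```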

  41. Stream Delivery Continued • Recall: the transport layer breaks down the message into smaller pieces for the network layer – the smaller pieces are called “segments” • TCP adds overhead to each segment – the overhead deals with flow and error control • TCP segments are encapsulated into IP datagrams • Example: segments being created from bytes in the buffers (i.e., 1 segment created from 5 buffered bytes)

  42. NUMBERING BYTES • TCP keeps track of segments using numbers relating to the bytes rather than the segments • The Sequence Number and Acknowledgment Number are used and relate to bytes • The start number is randomly generated (rather than always being 0) • For example, suppose 1057 is generated and 6000 bytes need to be sent – the numbering will range from 1057 to 7056 • Byte numbering is also used for flow and error control • After the bytes are numbered, TCP assigns a sequence # to each segment – the sequence # is the first byte number of the segment • The acknowledgment # is the sequence # incremented each time a byte is received (i.e., take a 4-byte segment with starting sequence # 11: when byte 11 is received, ack # = 12; when byte 12 is received, ack # = 13; etc.) – see the worked example below
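
The byte-numbering example from the slide, worked through in a few lines: 6000 bytes with a randomly chosen starting number of 1057 are numbered 1057 through 7056, and each segment’s sequence number is the number of its first byte. The 1000-byte segment size is an assumption for illustration.

```python
start, total_bytes, segment_size = 1057, 6000, 1000
last_byte = start + total_bytes - 1
print(last_byte)                          # 7056

# Sequence number of each segment = number of its first byte.
sequence_numbers = list(range(start, start + total_bytes, segment_size))
print(sequence_numbers)                   # [1057, 2057, 3057, 4057, 5057, 6057]

# Acknowledgment numbers name the next byte expected: after receiving byte 11
# the ack is 12, after byte 12 it is 13, and so on.
for received_byte in (11, 12, 13, 14):
    print("received byte", received_byte, "-> ack", received_byte + 1)
```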

  43. Connection establishment using three-way handshake (A) The server starts by telling TCP it can accept a connection – done by performing a “passive open” (B) When the client is ready to connect, it issues an “active open” to a specific server (1st) The client sends the first segment – the SYN (synchronization) bit is set and it has a randomly generated starting sequence number – it carries no data (2nd) The server sends a SYN + ACK segment (2 bits set) – the ACK acknowledges the client’s SYN segment – and the SYN synchronizes the server with the client in the opposite direction (implementing full duplex) – the server also sends its desired window size to the client (3rd) The client sends a third segment, with the ACK bit set, to acknowledge the server’s segment – in some cases, this segment can also carry the FIRST chunk of data to the server (a sketch follows below)
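
A hedged sketch of the three segments as (direction, flags, sequence number, acknowledgment number) tuples. The initial sequence numbers are made-up examples of the "randomly generated" values mentioned above.

```python
client_isn, server_isn = 1200, 4800   # illustrative random initial sequence numbers

handshake = [
    # 1st: client's active open -- SYN, random starting sequence number, no data
    ("client -> server", {"SYN"},        client_isn,     None),
    # 2nd: server (after its passive open) answers with SYN + ACK, plus its window size
    ("server -> client", {"SYN", "ACK"}, server_isn,     client_isn + 1),
    # 3rd: client acknowledges the server's SYN (and may carry the first data)
    ("client -> server", {"ACK"},        client_isn + 1, server_isn + 1),
]

for direction, flags, seq, ack in handshake:
    print(f"{direction}: flags={sorted(flags)} seq={seq} ack={ack}")
```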

  44. TCP - FLOW CONTROL • Flow control – the amount of data a Tx sends before receiving an acknowledgment from the Rx • Send too much and you overwhelm the routers, etc. • Send too little and the Tx stays idle too long waiting for acknowledgments • TCP defines a “window” that’s imposed on the buffer of data to be delivered • A sliding window is used to make transmission more efficient as well as to control the flow of data so that the destination does not become overwhelmed with data. • TCP’s sliding windows are byte-oriented.

  45. Sender buffer w/o sliding window • Before covering the sliding window concept, consider having no sliding window • If all of the pink locations were sent, it could overload the receiver’s buffer

  46. Receiver window • To describe the sliding window concept, we must define a “Receiver Window” • Receiver Window – indicates, at any particular time, how many more bytes the receiver can store • For example, the receiver window below is 7

  47. Sender buffer and sender window • We have flow control if the sender’s window is less than or equal to the receiver’s window. Why? • In the example below, the sender can’t send 7 bytes because 3 were already sent – the Rx hasn’t acknowledged these 3 yet • Therefore, the Tx can only send 4 more and not overload the Rx

  48. Sliding the sender window • Messages from the receiver change the position of the sender window • For example, suppose the sender sent 2 more bytes and received an acknowledgment from the Rx • If the Tx received an acknowledgment, that means the Rx has consumed bytes 200 – 202 • Now the window can slide over to encompass 203 – 209 (a sketch of this arithmetic follows below)
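
A hedged sketch of the window arithmetic from the last two slides: the sender may have at most "window" bytes outstanding, and an acknowledgment both recycles the acknowledged bytes and slides the window forward. The ack number 203 (next byte expected) is an illustrative reading of "the Rx has consumed bytes 200 – 202".

```python
window_size = 7          # advertised by the receiver
window_start = 200       # first byte covered by the current sender window
bytes_sent_unacked = 3   # e.g. bytes 200-202 already sent but not yet acknowledged

usable = window_size - bytes_sent_unacked
print("can still send", usable, "bytes")          # 4, as in slide 47

# An acknowledgment arrives saying the receiver consumed bytes 200-202
# (ack number 203 = next byte expected), so the window slides forward.
ack_number = 203
bytes_sent_unacked -= ack_number - window_start
window_start = ack_number
print("window now covers bytes", window_start, "through", window_start + window_size - 1)
# window now covers bytes 203 through 209, as in slide 48
```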

  49. Expanding the sender window If the receiver consumes data faster than it receives it, the size of the receiver window expands – this is relayed to the Tx and its window is expanded too. Vice versa: if the receiver consumes data slower than it receives it, the size of the receiver window shrinks – this is relayed to the Tx and its window is reduced too. NOTE: if the Rx buffer is full, the Tx closes its window until the Rx window is non-zero

  50. In TCP, the sender window size is controlled by the receiver window value. However, the sender’s actual window size can be smaller if there is congestion in the network.
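
The rule on this slide, expressed as a one-liner: the receiver’s advertised window sets the ceiling, but congestion can force the sender to use something smaller. Treating the congestion limit as a ready-made number ("congestion_window") is a simplification for illustration.

```python
def effective_sender_window(receiver_window: int, congestion_window: int) -> int:
    # The sender never exceeds what the receiver advertised, and it may use
    # less when the network is congested.
    return min(receiver_window, congestion_window)

print(effective_sender_window(receiver_window=7, congestion_window=4))   # 4: congestion limits the sender
print(effective_sender_window(receiver_window=7, congestion_window=20))  # 7: the receiver window is the cap
```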
