The Network Layer Congestion Control Algorithms, Quality of Service & Internetworking Leonard Jackson Jr. Shira Boatwright Kieaster Witherspoon
Congestion Control • Congestion occurs when too many packets are present in (a part of) the network, causing packet delay and loss that degrade performance. • The network and transport layers share the responsibility for handling congestion. • The network layer ultimately determines what to do with the excess packets. • The most effective way to control congestion is to reduce the load that the transport layer is placing on the network. • For this to work, the network and transport layers must work together.
Congestion Control • When the number of packets a host sends into a network is within its carrying capacity, the number of delivered packets is proportional to the number sent. • As the load gets close to the carrying capacity, traffic occasionally fills up the buffers inside routers and some packets are lost. • The lost packets then consume some of the capacity. • The network is now congested.
Congestion Collapse • If the network is not well designed, it may experience a congestion collapse. • Congestion collapse is when performance plummets as the offered load increases beyond the capacity. • This happens because packets can be delayed inside the network for so long that they are no longer useful by the time they leave the network.
Approaches to Congestion Control • When congestion is present it means that the load is (temporarily) greater than the resources (in a part of the network) can handle. • There are two solutions to this problem: increase the resources or decrease the load. • The basic way to avoid congestion is to build a network that is well matched to the traffic that it carries. • If most traffic is directed over a low-bandwidth link, congestion is likely.
Provisioning is when links and routers that are used frequently are upgraded at the earliest opportunity. • To make the most of the network capacity, routes can be tailored to traffic patterns that change during the day as network users wake and sleep. • Traffic-aware routing can be used to shift traffic away from heavily used paths by changing the shortest path weights, as sketched below.
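A minimal sketch of how a traffic-aware weight might be computed for one link; the formula and the names (base_delay_ms, utilization) are illustrative assumptions, not a standard, but they capture the idea that a heavily loaded link should look less attractive to the shortest-path computation.

```python
# Illustrative sketch: recompute a link's shortest-path weight from its load.
# The specific formula and parameter names are assumptions for illustration;
# real protocols adjust weights far more conservatively to avoid oscillation.

def link_weight(base_delay_ms: float, utilization: float) -> float:
    """Return a routing weight that grows as the link fills up."""
    utilization = min(utilization, 0.99)                  # avoid division by zero at full load
    queueing_penalty = utilization / (1.0 - utilization)  # load factor that explodes near 100%
    return base_delay_ms * (1.0 + queueing_penalty)

# A lightly loaded link keeps roughly its propagation delay as its weight...
print(link_weight(10.0, 0.10))   # ~11.1
# ...while a heavily loaded one becomes much less attractive to shortest-path routing.
print(link_weight(10.0, 0.90))   # ~100.0
```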
Sometimes it is not possible to increase network capacity. • In this case the only way to decrease congestion is to decrease the load. • New connections can be blocked to prevent congestion. This is called admission control. • Load shedding can be used to force the network to discard packets that it cannot deliver. Policies choose which packets to discard, which also helps prevent congestion; a load-shedding sketch follows below.
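A minimal sketch of one load-shedding policy, in which a full buffer discards the least important packet rather than dropping blindly. The Packet structure and the priority values are assumptions made for illustration.

```python
# Illustrative load-shedding sketch: when the buffer is full, discard the
# lowest-priority packet (either one already queued or the new arrival).
from collections import namedtuple

Packet = namedtuple("Packet", ["priority", "data"])   # higher number = more important

class Buffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue: list[Packet] = []

    def enqueue(self, pkt: Packet) -> bool:
        if len(self.queue) < self.capacity:
            self.queue.append(pkt)
            return True
        # Buffer full: shed load by dropping the least important packet.
        lowest = min(self.queue, key=lambda p: p.priority)
        if pkt.priority > lowest.priority:
            self.queue.remove(lowest)     # discard the old low-priority packet
            self.queue.append(pkt)
            return True
        return False                      # discard the arriving packet instead

buf = Buffer(capacity=2)
buf.enqueue(Packet(1, "background transfer"))
buf.enqueue(Packet(5, "video frame"))
print(buf.enqueue(Packet(3, "audio sample")))  # True: evicts the priority-1 packet
```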
Traffic-Aware Routing • The goal in traffic-aware routing is to shift traffic away from hotspots, which will be the first places in the network to experience congestion. • There are two techniques that can contribute to a successful solution. One is multipath routing, in which there can be multiple paths from a source to a destination. • The other is to have the routing scheme shift traffic across routes slowly enough that it is able to converge.
Admission Control • One technique that is widely used to keep congestion at a manageable level is admission control. Admission control does not set up a new virtual circuit unless the network can carry the added traffic without becoming congested. • Admission control can be used along with traffic-aware routing, considering routes that avoid traffic hotspots as part of the setup procedure. A minimal admission check is sketched below.
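A minimal sketch of admission control on a single link: a new virtual circuit is accepted only if its requested rate fits in the remaining capacity. The class layout and the simple rate-sum test are assumptions for illustration; real admission control also considers burstiness, delay bounds, and the chosen route.

```python
# Illustrative admission-control sketch for one link: admit a new flow only if
# the sum of reserved rates stays within the link capacity.

class Link:
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, requested_mbps: float) -> bool:
        if self.reserved + requested_mbps <= self.capacity:
            self.reserved += requested_mbps   # set up the virtual circuit
            return True
        return False                          # refuse: the network would become congested

link = Link(capacity_mbps=100.0)
print(link.admit(60.0))   # True
print(link.admit(50.0))   # False: would exceed capacity, so the connection is blocked
```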
Traffic Throttling • Senders can adjust their transmissions to send as much traffic as the network can readily deliver. One common scheme for doing so is sketched below.
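A simplified sketch of additive-increase/multiplicative-decrease (AIMD), the rate-adjustment rule used by TCP: the sender probes for spare capacity a little at a time and backs off sharply when the network signals congestion. The window sizes and constants below are arbitrary illustrations.

```python
# Simplified AIMD sketch: add to the sending window each round trip while the
# network keeps up, and halve it when a congestion signal (e.g., packet loss) arrives.

def aimd(window: float, congestion_signal: bool) -> float:
    if congestion_signal:
        return max(window / 2.0, 1.0)   # multiplicative decrease on congestion
    return window + 1.0                 # additive increase otherwise

window = 1.0
for congested in [False, False, False, False, True, False, False]:
    window = aimd(window, congested)
    print(f"window = {window:.1f} packets")
```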
Quality of Service http://www.youtube.com/watch?v=i99kFCpMVVQ
Quality of Service • Overprovisioning means building a network with enough capacity for whatever traffic will be thrown at it.
Quality of Service Issues that must be addressed to ensure Quality of Service • What applications need from the network • How to regulate the traffic that enters the network • How to reserve resources at routers to guarantee performance • Whether the network can safely accept more traffic No single technique deals efficiently with all these issues.
Application Requirements • A stream of packets from a source to a destination is called a flow. A flow might be all the packets of a connection in a connection-oriented network, or all the packets sent from one process to another process in a connectionless network.
Application Requirements • The needs of each flow are characterized by four primary parameters: bandwidth, delay, jitter, and loss, which together determine the QoS (Quality of Service) the flow requires. A simple flow specification is sketched below.
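A flow's requirements can be written down as a simple record of these four parameters. The field names and the example values below are assumptions made for illustration, not taken from any standard.

```python
# Illustrative flow specification holding the four QoS parameters.
from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth_kbps: float   # required throughput
    delay_ms: float         # maximum acceptable one-way delay
    jitter_ms: float        # maximum acceptable variation in delay
    loss_rate: float        # tolerable fraction of lost packets

# A telephony-style flow: modest bandwidth, but tight delay and jitter bounds.
voice_call = FlowSpec(bandwidth_kbps=64, delay_ms=150, jitter_ms=30, loss_rate=0.01)
print(voice_call)
```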
Application Requirements • Variation in the delay or packet arrival times is called jitter
Traffic Shaping • Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that enters the network GOAL: Allow applications to transmit a wide variety of traffic that suits their needs, including some bursts, yet have a simple and useful way to describe the possible traffic patterns to the network
Traffic Shaping • Traffic shaping reduces congestion and thus helps the network live up to its promise. • Monitoring a traffic flow is called traffic policing. • Shaping and policing are essential for real-time data (audio and video connections).
Traffic Shaping: Leaky & Token Buckets Calculation the length of maximum burst B+RS=MS S seconds M maximum output (bytes/sec) B bytes R arrival rate (bytes/sec)
Packet Scheduling • Algorithms that allocate router resources among the packets of a flow and between competing flows are called packet scheduling algorithms. • Resources that could potentially be reserved for different flows: • Bandwidth • Buffer space • CPU cycles
Packet Scheduling: FIFO/FCFS • First-In First-Out (or First-Come First-Served) is an algorithm where each router buffers packets in a queue for each output line until they can be sent, and sends them in the same order as they arrived. FIFO routers usually drop newly arriving packets when the queue is full. Since the newly arrived packet would have been placed at the end of the queue, this behavior is called tail drop, as sketched below.
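A minimal FIFO queue with tail drop; the queue size and the way packets are represented are assumptions for illustration.

```python
# FIFO queue with tail drop: packets leave in arrival order, and when the
# buffer is full the newly arriving packet is the one discarded.
from collections import deque

class FifoQueue:
    def __init__(self, max_packets: int):
        self.max_packets = max_packets
        self.queue = deque()

    def arrive(self, packet) -> bool:
        if len(self.queue) >= self.max_packets:
            return False                 # tail drop: discard the newcomer
        self.queue.append(packet)        # otherwise wait in order for the output line
        return True

    def depart(self):
        return self.queue.popleft() if self.queue else None

q = FifoQueue(max_packets=3)
for name in ["p1", "p2", "p3", "p4"]:
    print(name, q.arrive(name))          # p4 is tail-dropped
print(q.depart())                        # p1 leaves first
```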
Admission Control • QoS guarantees for new flows may still be accommodated by choosing a different route for the flow, one that has excess capacity; this is called QoS routing. • It is also possible to split the traffic for each destination over multiple paths to more easily find excess capacity. • Although some applications may know about their bandwidth requirements, few know about buffers or CPU cycles, so at the minimum a different way is needed to describe flows and translate this description to router resources. • Some applications are far more tolerant of an occasional missed deadline than others. Applications must choose from the type of guarantees that the network can make, whether hard guarantees or behavior that will hold most of the time. Guarantees for most of the packets are often sufficient, and more flows with this guarantee can be supported for a fixed capacity. • Some applications may be willing to haggle about the flow parameters and others may not.
What is Internetworking • When two or more networks are connected, they form what is known as an internetwork, or more simply put, an internet. *Note: the difference between "internet" and "Internet" is the capitalization of the "I", which distinguishes the global Internet from other internetworks.*
How Networks Differ • Network differences can be internal to the physical and data link layers, or they can be exposed to the network layer. • When dealing with differences exposed at the network layer, it is papering over these differences that makes internetworking more difficult than operating within a single network. • For example: • When packets sent by a source on one network must transit one or more foreign networks before reaching the destination network, many problems can occur at the interfaces between networks. The source must be able to address the destination, and the destination's network may require that a new connection be set up on short notice, which causes delay and much overhead if the connection is not used for many more packets.
Network Connection • There are two basic choices for connecting different networks: • Building devices that translate or convert packets from each kind of network into packets for each other network • Solving the problem by adding a layer of indirection and building a common layer on top of the different networks
Tunneling • When the source and destination hosts are on the same type of network, but there is a different type of network in between, the solution is called tunneling; a sketch of the idea follows below. • Tunneling is widely used to connect isolated hosts and networks using other networks. • The resulting network is called an overlay because it has effectively been overlaid on the base network.
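A schematic sketch of tunneling as encapsulation: the router at the tunnel entrance wraps the original packet inside an outer packet addressed to the tunnel exit, and the exit router unwraps it. The dict-based "headers" are simplified stand-ins assumed for illustration, not real IPv4/IPv6 formats.

```python
# Tunneling sketch: the original packet travels as the payload of an outer
# packet across the intermediate network, then is unwrapped at the far end.

def encapsulate(inner_packet: dict, tunnel_entry: str, tunnel_exit: str) -> dict:
    """Router at the tunnel entrance wraps the packet for transit."""
    return {"src": tunnel_entry, "dst": tunnel_exit, "payload": inner_packet}

def decapsulate(outer_packet: dict) -> dict:
    """Router at the tunnel exit recovers the original packet."""
    return outer_packet["payload"]

original = {"src": "hostA", "dst": "hostB", "data": "hello"}
outer = encapsulate(original, tunnel_entry="routerR1", tunnel_exit="routerR2")
assert decapsulate(outer) == original    # unchanged after crossing the tunnel
print(outer)
```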
Internetwork Routing • Within each network, an intradomain or interior gateway protocol is used for routing. • Across the networks that make up the internet, an interdomain or exterior gateway protocol is used; the interdomain routing protocol of the Internet is called BGP (Border Gateway Protocol). • Networks may all use different intradomain protocols, but they must use the same interdomain protocol.
Packet Fragmentation • Each network or link imposes some maximum size of its packets. These limits have various causes, among them: • Hardware • Operating system • Protocols • Compliance with some (inter)national standard • Desire to reduce error-induced retransmissions to some level • Desire to prevent one packet from occupying the channel too long
Packet Fragmentation cont’d • Hosts usually prefer to transmit large packets because this reduces packet overheads such as bandwidth wasted on header bytes. • A problem appears when a large packet must travel through a network whose maximum packet size is too small; the packet must then be fragmented, as sketched below.
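A minimal sketch of splitting a large packet into fragments that fit a smaller maximum packet size and reassembling them at the destination. The offset/last-flag fields are illustrative assumptions, not the actual IP fragmentation header format.

```python
# Fragmentation sketch: a payload too large for the next network's maximum
# packet size is split into pieces, each carrying an offset and a "last" flag
# so the destination can reassemble them in order.

def fragment(payload: bytes, max_fragment_size: int) -> list[dict]:
    fragments = []
    for offset in range(0, len(payload), max_fragment_size):
        chunk = payload[offset:offset + max_fragment_size]
        fragments.append({
            "offset": offset,
            "last": offset + len(chunk) >= len(payload),
            "data": chunk,
        })
    return fragments

def reassemble(fragments: list[dict]) -> bytes:
    return b"".join(f["data"] for f in sorted(fragments, key=lambda f: f["offset"]))

original = b"A" * 2500                       # too big for a 1000-byte network
parts = fragment(original, max_fragment_size=1000)
print(len(parts))                            # 3 fragments
assert reassemble(parts) == original
```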