
Ns Simulation Final presentation

Ns Simulation Final presentation. Stella Pantofel 304026354, Igor Berman 313879942, Michael Halperin 317321982. Simulation of TCP Reno vs. TCP SACK over a network containing a congested satellite or regular link, using Explicit Congestion Notification (ECN) and Random Early Detection (RED).


Presentation Transcript


  1. Ns Simulation Final presentation Stella Pantofel 304026354 Igor Berman 313879942 Michael Halperin 317321982

  2. Simulation of TCP Reno vs. TCP SACK over a network containing a congested satellite or regular link, using: Explicit Congestion Notification (ECN) and Random Early Detection (RED)

  3. The outline of the project Using the NS-2 simulator, we simulate a congested network that uses RED with ECN, with the following options: • TCP Reno / TCP SACK • Different RED threshold parameters. • Variable number of connections. • The network has regular links, with one of the links congested (either a satellite or a regular one). We supply results (graphs) that compare the throughputs.

  4. TCP SACK Overview • Selective acknowledgement (SACK) is a TCP option designed to adapt the sender's rate to actual network conditions more accurately. • Its main purpose is to increase throughput when multiple packets are lost from the same window. • Main idea: the receiver tells the sender not only the next in-sequence expected byte, but also the ranges of bytes received out of order. It replies with a map of the blocks received rather than only the last segment received in sequence. • SACK uses the Option field in the TCP header. • It is invoked only if both sides support it. • The number of timeouts experienced during a connection is dramatically reduced with SACK. (A minimal ns-2 configuration is sketched below.)
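
A minimal sketch of one SACK flow in ns-2, assuming the simulator's standard OTcl classes (Agent/TCP/Sack1, Agent/TCPSink/Sack1, Application/FTP); node names and link parameters are illustrative, not taken from the deck:

    # Sketch: one FTP-over-TCP-SACK flow in ns-2 (OTcl).
    # Agent/TCP/Sack1 is the SACK sender; it must be paired with a
    # SACK-capable sink for the option to be used by both sides.
    set ns  [new Simulator]
    set src [$ns node]
    set dst [$ns node]
    $ns duplex-link $src $dst 4Mb 5ms DropTail

    set tcp [new Agent/TCP/Sack1]       ;# TCP sender with SACK
    set snk [new Agent/TCPSink/Sack1]   ;# SACK-capable receiver
    $ns attach-agent $src $tcp
    $ns attach-agent $dst $snk
    $ns connect $tcp $snk

    set ftp [new Application/FTP]       ;# permanent FTP traffic source
    $ftp attach-agent $tcp
    $ns at 0.5  "$ftp start"
    $ns at 60.0 "exit 0"                ;# 60-second run, as in the project
    $ns run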

  5. Active Queue Management • Purpose of queue management algorithms: • manage the length of packet queues by dropping packets; • detect congestion before the router queue overflows. • Example: Random Early Detection (RED) • Unlike a simple drop-tail queue, which drops packets only when the buffer is full, RED drops packets probabilistically. It monitors the average queue size and randomly chooses connections to notify of the congestion: • (average queue size) < (minimum threshold) => no packets are dropped. • (average queue size) > (maximum threshold) => every arriving packet is dropped. • (minimum threshold) < (average queue size) < (maximum threshold) => each arriving packet is dropped with a probability that is a function of the average queue size. (See the sketch below.)
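
The three regimes above can be condensed into a small plain-Tcl procedure; this is a sketch of the decision rule only, not ns-2's internal implementation, and the parameter names follow the slide:

    # Sketch of the RED drop decision (plain Tcl, not ns-2 internals).
    # Returns the probability that an arriving packet is dropped/marked,
    # given the current average queue size and the RED parameters.
    proc red_drop_prob {avg minth maxth maxp} {
        if {$avg < $minth} {
            return 0.0        ;# below minth: never drop
        } elseif {$avg >= $maxth} {
            return 1.0        ;# above maxth: drop every arriving packet
        } else {
            # between the thresholds: probability grows linearly
            # from 0 (at minth) to maxp (at maxth)
            return [expr {$maxp * ($avg - $minth) / double($maxth - $minth)}]
        }
    }

    # Example with the default parameters quoted later in the deck:
    puts [red_drop_prob 10 5 15 0.1]   ;# -> 0.05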

  6. Explicit Congestion Notification (ECN) • Routers may mark packets instead of dropping them. • The ECN-Capable Transport bit is set by the data sender to indicate that the end points of the transport protocol are ECN-capable. The CE (Congestion Experienced) bit is set by the router to indicate congestion to the end points. • Upon receipt of a single CE packet, the end systems apply the same congestion control response as they would to the loss of a single packet. (An ns-2 configuration sketch follows.)
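
A hedged ns-2 configuration sketch: to our knowledge the standard knobs are ecn_ on the TCP agent and setbit_ on the RED queue, but this should be verified against the ns-2 version in use:

    # Sketch: enabling ECN in ns-2 (assumed standard knobs).
    # Both the transport and the RED queue must opt in.
    Agent/TCP set ecn_ 1          ;# TCP endpoints negotiate ECN (ECT bit)
    Queue/RED set setbit_ true    ;# RED marks CE instead of dropping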

  7. Simulated Network N sources continuously send FTP data to Terminal2, one flow per source. A single simulation lasts 60 seconds. N is increased for each subsequent simulation, up to 60. (The per-source setup is sketched after the next slide.)

  8. Topology Definitions • When simulating congestion over regular links, the delay of the Terminal1-Satellite and Satellite-Terminal2 links is 5 ms (instead of 125 ms). • Buffer capacity of Terminal1: 100 packets. • Buffer capacity of Bridge: 100 packets. • The starting time of each connection varies uniformly between 0 and 1 seconds. • The bandwidth of each source's link to the Bridge is random, varying between 3 and 6 Mb. • Likewise the delay varies between 3 and 6 ms.
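
A sketch of the per-source setup described on slides 7-8, assuming the standard ns-2 OTcl API; $ns, $bridge, $term2 and $N are illustrative names, and RandomVariable/Uniform supplies the randomized bandwidths, delays and start times:

    # Sketch of the N-source setup from slides 7-8 (illustrative names).
    # Each source gets a random-bandwidth, random-delay link to the Bridge,
    # one FTP flow toward Terminal2, and a uniform start time in [0, 1] s.
    set bwrv [new RandomVariable/Uniform]   ;# bandwidth, 3-6 Mb
    $bwrv set min_ 3.0
    $bwrv set max_ 6.0
    set dlrv [new RandomVariable/Uniform]   ;# delay, 3-6 ms
    $dlrv set min_ 3.0
    $dlrv set max_ 6.0
    set strv [new RandomVariable/Uniform]   ;# start time, 0-1 s
    $strv set min_ 0.0
    $strv set max_ 1.0

    for {set i 0} {$i < $N} {incr i} {
        set src($i) [$ns node]
        $ns duplex-link $src($i) $bridge [$bwrv value]Mb [$dlrv value]ms DropTail

        set tcp($i) [new Agent/TCP/Sack1]       ;# or Agent/TCP/Reno
        set snk($i) [new Agent/TCPSink/Sack1]
        $ns attach-agent $src($i) $tcp($i)
        $ns attach-agent $term2  $snk($i)
        $ns connect $tcp($i) $snk($i)

        set ftp($i) [new Application/FTP]
        $ftp($i) attach-agent $tcp($i)
        $ns at [$strv value] "$ftp($i) start"
    }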

  9. Simulating satellite links in NS2 version 2.1b7a • The available version of the NS2 simulator (2.1b7a) doesn't allow combining satellite links with regular links in one simulation. Therefore, we simulate satellite links using wired links with a long propagation delay and, where needed, a high loss rate. To check the validity of this substitution we conducted several experiments over this topology (see the sketch below).
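
A minimal sketch of the stand-in link, assuming standard ns-2 calls; the 125 ms one-way delay is the figure quoted on slide 8, while the 10 Mb bandwidth is an illustrative assumption:

    # Sketch: a wired stand-in for a geostationary satellite hop.
    # 125 ms one-way propagation delay (per slide 8); bandwidth is
    # illustrative. RED is attached as the link's queue discipline.
    $ns duplex-link $term1 $sat 10Mb 125ms RED
    $ns duplex-link $sat $term2 10Mb 125ms RED
    $ns queue-limit $term1 $sat 100      ;# Terminal1 buffer, per slide 8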

  10. Satellite link vs. Regular link with long delay [Graphs: throughput over time, with DropTail and with RED]

  11. Satellite link vs. Regular link with long delay (cont.) • These graphs show the throughput of the satellite link vs. a regular link as a function of time. As we can see, the throughputs are almost identical. • From this we conclude that despite minor differences, the overall behavior is very similar in both cases. • Based on these results we believe that using a regular link with a long delay as a replacement for a geostationary satellite link is valid, and that is what we did in our simulations.

  12. TCP SACK vs. TCP Reno over satellite link and regular link with default RED parameters

  13. TCP SACK vs. TCP Reno over satellite link and regular link with default RED parameters: Results • The average improvement over a satellite link achieved by using TCP SACK vs. TCP Reno is around 5%. • The average improvement over a regular link achieved by using TCP SACK vs. TCP Reno is less than 0.5%. • In both cases the average throughput over a regular link is approximately 20% higher than over the satellite link.

  14. RED parameters • Formulas used in RED's queue management algorithm: • Average queue size (an exponentially weighted moving average of the instantaneous queue size q): avg = (1 - wq) * avg + wq * q. • Probability of an arriving packet being dropped/marked, for minth <= avg < maxth: pb = maxp * (avg - minth) / (maxth - minth); in full RED this is further scaled by the count of packets since the last drop, pa = pb / (1 - count * pb). • The RED parameters we vary: • minth (threshold_): the minimum queue size threshold; default 5. • maxth (maxthreshold_): the maximum queue size threshold; default 15. • wq (q_weight_): the weight factor in computing the average queue size; default 0.002. • maxp (1/linterm): the drop probability when the average queue size equals maxth; default 0.1 (linterm = 10).
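
For reference, the defaults quoted above expressed as ns-2 OTcl settings; this is a sketch, and the exact parameter spellings vary across ns-2 releases (later versions use thresh_ and maxthresh_), so they should be checked against the installed version:

    # Sketch: RED defaults as quoted on this slide (ns-2 OTcl).
    # Parameter spellings differ between ns-2 releases; verify locally.
    Queue/RED set thresh_    5       ;# minth
    Queue/RED set maxthresh_ 15      ;# maxth
    Queue/RED set q_weight_  0.002   ;# wq
    Queue/RED set linterm_   10      ;# maxp = 1/linterm = 0.1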

  15. Proposed Settings For RED Parameters • We increased wq relative to its default value, so that the average follows the current queue size more closely. • maxth was set to a considerably high value compared to the queue buffer size: once there is heavy congestion we try to absorb it, because the sources become aware of it only after a long time, as a result of the considerable delay on the links. • minth was set to a considerably low value, increasing the chance of an early "unforced" drop and slowing down some of the connections before real congestion is experienced. • maxp was decreased relative to its default value, so that the drop rate when the average queue size is between minth and maxth is not too high. (An illustrative configuration is sketched below.)
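
The deck does not state the actual values used in the experiments, so the numbers below are purely hypothetical; the sketch only illustrates the four directions of change described above:

    # Hypothetical illustration of the tuning directions on this slide;
    # the deck does not give the actual values used in the experiments.
    Queue/RED set q_weight_  0.01    ;# wq raised: track the current queue faster
    Queue/RED set maxthresh_ 80      ;# maxth raised toward the 100-pkt buffer
    Queue/RED set thresh_    3       ;# minth lowered: earlier "unforced" drops
    Queue/RED set linterm_   20      ;# maxp lowered to 1/20 = 0.05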

  16. Improving throughput of TCP Reno over satellite link by changing RED parameters

  17. Improving throughput of TCP SACK over satellite link by changing RED parameters

  18. TCP Reno vs. TCP Sack with default RED and changed parameters over satellite link

  19. How do errors harm the throughput?

  20. How do errors harm the throughput? • A low loss rate of 0.0001% doesn't influence the throughput significantly, as only a few packets (if any) are dropped during the simulation. • A medium loss rate of 0.1% degrades the throughput when the number of sources is small. As the number of sources increases, so does the throughput, because when a connection is affected by an error its load on the links is reduced and others can use the available bandwidth. • A high loss rate of 1% results in an enormous degradation of throughput, and even increasing the number of sources does not allow reaching the previous level. (An error-model sketch follows.)
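
A sketch of how such losses can be injected in ns-2, assuming the standard ErrorModel API; the rate shown corresponds to the 0.1% case, and $term1/$sat are the illustrative node names used earlier:

    # Sketch: packet-loss injection on the congested link
    # (standard ns-2 ErrorModel, assumed API; 0.1% case shown).
    set em [new ErrorModel]
    $em unit pkt                            ;# drop whole packets
    $em set rate_ 0.001                     ;# 0.1% loss rate
    $em ranvar [new RandomVariable/Uniform]
    $ns lossmodel $em $term1 $sat           ;# attach to the long-delay link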

  21. Error of 0.0001% SACK improves Reno by a factor of 1.0493956. This error rate means that on average only 1 packet out of 1,000,000 is lost, which is negligible when computing the average throughput over a 60-second simulation. Thus the results we obtained are very similar to the results without errors.

  22. Error of 0.1% SACK improves Reno by a factor of 1.047830594. Compared to the 0.0001% loss rate, we can see that only at the high end of the source count do we get the same throughput, i.e. the more sources we have, the better they can back each other up in case of losses.

  23. Error of 1% SACK improves Reno by a factor of 1.06464. In the case of a 1% error rate we observe a severe decrease in throughput, by more than 50%. However, the pattern of improvement as the number of sources grows still exists.

  24. Errors Tuning the RED parameters helps even when there are errors on the links; however, when the loss rate is extremely high, changing the RED parameters doesn't help much. At the same time, as can be observed from the last graph, the relative improvement achieved by using SACK is slightly higher than usual. This is probably because SACK can recover from several packet drops within the same window, and with a 1% loss rate there is a good chance that several packets from the same window will actually be dropped.

  25. Conclusions • From the presented graphs we can draw several conclusions: • TCP SACK improves the throughput and is preferable to TCP Reno. However, we must remember that there is a trade-off in using TCP SACK: the per-packet processing time is longer than with TCP Reno. • RED parameters: • We observed that for a given network it is worthwhile to change the default RED parameters so that they suit the network's specific configuration; this produces visible improvements in throughput. • However, determining optimal values is not an easy task and depends on the characteristics of the network traffic as well as on the physical characteristics of the network. • The best results (by far) are achieved when combining TCP SACK with RED parameters configured specifically for the given network: not only do RED and TCP SACK not interfere with each other, they even improve each other.
