
EECS 122: Introduction to Computer Networks Performance Modeling




  1. EECS 122: Introduction to Computer Networks Performance Modeling Computer Science Division Department of Electrical Engineering and Computer Sciences University of California, Berkeley Berkeley, CA 94720-1776

  2. Outline • Motivations • Timing diagrams • Metrics • Evaluation techniques

  3. Motivations • Understanding network behavior • Improving protocols • Verifying correctness of implementations • Detecting faults • Monitoring service level agreements • Choosing providers • Billing

  4. Outline • Motivations • Timing diagrams • Metrics • Evaluation techniques

  5. Timing Diagrams • Sending one packet • Queueing • Switching • Store and forward • Cut-through • Fluid view

  6. Definitions • Link bandwidth (capacity): maximum rate (in bps) at which the sender can send data along the link • Propagation delay: time it takes the signal to travel from source to destination • Packet transmission time: time it takes the sender to transmit all bits of the packet • Queueing delay: time the packet needs to wait before being transmitted because the queue was not empty when it arrived • Processing time: time it takes a router/switch to process the packet header, manage memory, etc.

  7. Sending One Packet • Link bandwidth: R bits per second (bps) • Packet size: P bits • Propagation delay: T seconds • Transmission time = P/R • Propagation delay T = Length/speed, where 1/speed is roughly 3.3 usec/km in free space, 4 usec/km in copper, 5 usec/km in fiber

  8. Sending One Packet: Examples • Example 1: P = 1 Kbyte, R = 1 Gbps, 100 km of fiber => T = 500 usec, P/R = 8 usec, so T >> P/R • Example 2: P = 1 Kbyte, R = 100 Mbps, 1 km of fiber => T = 5 usec, P/R = 80 usec, so T << P/R
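A minimal Python sketch of the calculations on slides 7 and 8, assuming fiber propagation at 5 usec/km and treating 1 Kbyte as 8,000 bits so the output matches the slide's round numbers:

```python
def transmission_time(packet_bits, rate_bps):
    """Time to push all bits of a packet onto the link: P/R, in seconds."""
    return packet_bits / rate_bps

def propagation_delay(length_km, usec_per_km=5.0):
    """Signal travel time T = length/speed, in seconds (default: ~5 usec/km in fiber)."""
    return length_km * usec_per_km * 1e-6

P = 8_000  # 1 Kbyte, treated as 8,000 bits for round numbers
for rate_bps, length_km in [(1e9, 100), (100e6, 1)]:   # slide 8's two examples
    T = propagation_delay(length_km)
    tx = transmission_time(P, rate_bps)
    print(f"R = {rate_bps/1e6:.0f} Mbps, {length_km} km fiber: "
          f"T = {T*1e6:.0f} usec, P/R = {tx*1e6:.0f} usec")
```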

  9. Queueing • The queue holds Q bits when the packet arrives => the packet has to wait for the queue to drain before being transmitted • Link capacity = R bps, propagation delay = T sec, packet size = P bits • Queueing delay = Q/R

  10. Queueing Example • P = 1 Kbit; R = 1 Mbps => P/R = 1 ms • Packets arrive at t = 0, 0.5, and 1 ms • Delay for a packet that arrives at time t: d(t) = Q(t)/R + P/R • Packet 1: d(0) = 1 ms • Packet 2: d(0.5) = 1.5 ms • Packet 3: d(1) = 2 ms • [Figure: queue size Q(t) vs. time; Q(t) jumps by 1 Kbit at each arrival and drains at rate R]
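A small Python sketch of slide 10's FIFO example, assuming the queue is empty at t = 0; the packet size, link rate, and arrival times come from the slide:

```python
R = 1e6              # link rate: 1 Mbps
P = 1000             # packet size: 1 Kbit
arrivals = [0.0, 0.5e-3, 1.0e-3]   # arrival times from slide 10, in seconds

free_at = 0.0        # time at which the link finishes everything queued so far
for i, t in enumerate(arrivals, start=1):
    queue_bits = max(0.0, free_at - t) * R   # Q(t): backlog still to be transmitted
    delay = queue_bits / R + P / R           # d(t) = Q(t)/R + P/R
    free_at = max(free_at, t) + P / R        # link stays busy until this packet is out
    print(f"packet {i}: d({t*1e3:g} ms) = {delay*1e3:g} ms")
```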

  11. Switching: Store and Forward • A packet is stored (enqueued) before being forwarded (sent) • [Figure: timing diagram for one packet crossing links of 5, 100, 10, and 10 Mbps between sender and receiver]

  12. Store and Forward: Multiple Packet Example • [Figure: timing diagram for several packets crossing the same 5/100/10/10 Mbps path; each switch receives the whole packet before forwarding it]
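A short Python sketch of store-and-forward timing over a multi-link path like the one above, ignoring propagation and processing delay; the packet size and the order of the link rates along the path are illustrative assumptions:

```python
P = 8_000                          # packet size in bits (assumed: 1 Kbyte)
rates = [10e6, 10e6, 100e6, 5e6]   # assumed link order, sender -> receiver
num_packets = 3

# done[i][j] = time at which packet i has been fully received after link j.
# Store and forward: link j can start sending packet i only when
# (a) packet i has fully arrived over link j-1, and (b) link j is idle.
done = [[0.0] * len(rates) for _ in range(num_packets)]
for i in range(num_packets):
    for j, R in enumerate(rates):
        arrived = done[i][j - 1] if j > 0 else 0.0    # all packets ready at t = 0
        link_free = done[i - 1][j] if i > 0 else 0.0
        done[i][j] = max(arrived, link_free) + P / R

for i in range(num_packets):
    print(f"packet {i + 1} fully received at {done[i][-1] * 1e3:.2f} ms")
```

The recurrence mirrors the timing diagram: the slowest link dominates, and later packets queue behind earlier ones at that link.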

  13. Switching: Cut-Through • A packet starts being forwarded (sent) as soon as its header is received • [Figure: timing diagram with R1 = R2 = 10 Mbps; the header leads the rest of the packet onto the next link] • What happens if R2 > R1?

  14. Fluid Flow System • Packets are serviced bit-by-bit as they arrive • Q(t) = queue size at time t • a(t) = arrival rate • e(t) = departure rate • [Figure: arrival rate a(t), departure rate e(t), and queue size Q(t) vs. time t]
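A minimal discrete-time sketch of the fluid view; the link rate R, the arrival-rate profile a(t), and the time step are assumed values chosen only to make the example runnable:

```python
R = 1e6          # assumed service (link) rate, bits per second
dt = 1e-4        # time step, seconds

def a(t):
    """Assumed arrival-rate profile: a 2 Mbps burst for the first 5 ms, then idle."""
    return 2e6 if t < 5e-3 else 0.0

Q = 0.0
for step in range(200):                   # simulate 20 ms
    t = step * dt
    e = R if Q > 0 else min(a(t), R)      # departure rate: drain at R when backlogged
    Q = max(0.0, Q + (a(t) - e) * dt)     # fluid queue evolution
    if step % 50 == 0:
        print(f"t = {t*1e3:4.1f} ms   Q(t) = {Q:8.1f} bits")
```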

  15. Outline • Motivations • Timing diagrams • Metrics • Throughput • Delay • Evaluation techniques

  16. Throughput • Throughput of a connection or link = total number of bits successfully transmitted during some period [t, t + T) divided by T • Link utilization = throughput of the link / link rate • Bit rate units: 1 Kbps = 10^3 bps, 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps [For memory: 1 Kbyte = 2^10 bytes = 1024 bytes] • Some rates are expressed in packets per second (pps) => relevant for routers/switches where the bottleneck is the header processing
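A tiny Python illustration of these definitions; the bit count and measurement interval are made-up numbers:

```python
link_rate = 10e6             # 10 Mbps link
bits_delivered = 4_000_000   # bits successfully transmitted in the interval (assumed)
T = 0.8                      # length of the measurement interval [t, t+T), in seconds

throughput = bits_delivered / T          # bits per second
utilization = throughput / link_rate     # fraction of the link rate actually used
print(f"throughput  = {throughput/1e6:.2f} Mbps")
print(f"utilization = {utilization:.0%}")
```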

  17. Example: Window-Based Flow Control • Connection: send W bits (window size), wait for the ACKs, repeat • Assume the round-trip time is RTT seconds • Throughput = W/RTT bps • Numerical example: W = 64 Kbytes, RTT = 200 ms => Throughput = W/RTT ≈ 2.6 Mbps • [Figure: the source sends one window per RTT]
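A quick Python check of the numerical example, assuming 1 Kbyte = 1024 bytes:

```python
W_bits = 64 * 1024 * 8     # window size: 64 Kbytes expressed in bits
RTT = 0.200                # round-trip time, in seconds

throughput = W_bits / RTT  # one full window delivered per RTT
print(f"throughput = {throughput/1e6:.2f} Mbps")   # ~2.62 Mbps
```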

  18. Throughput: Fluctuations • Throughput may vary over time • [Figure: throughput vs. time, with its max, mean, and min marked]

  19. Delay • Delay (latency) of a bit (packet, file) from A to B: the time required for the bit (packet, file) to go from A to B • Jitter: variability in delay • Round-Trip Time (RTT): two-way delay from sender to receiver and back • Bandwidth-delay product: product of bandwidth and delay => the “storage” capacity of the network
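A short Python sketch of the bandwidth-delay product, reusing the 1 Gbps link and 500 usec propagation delay from slide 8; reading the product as bits "in flight" follows the "storage capacity" remark above:

```python
bandwidth = 1e9      # link rate, bits per second (slide 8's first example)
delay = 500e-6       # one-way propagation delay, in seconds

bdp_bits = bandwidth * delay   # bits that fit "in flight" on the link at once
print(f"bandwidth-delay product = {bdp_bits:.0f} bits "
      f"({bdp_bits/8/1024:.1f} Kbytes)")   # 500,000 bits, about 61 Kbytes
```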

  20. Delay: Illustration 1 • [Figure: latest bit seen by time t at point 1 (sender) and point 2 (receiver); the horizontal gap between the two curves is the delay]

  21. Delay: Illustration 2 • [Figure: packet arrival times at point 1 (sender) and point 2 (receiver); the shift between the two sequences is the delay]

  22. Little’s Theorem • Assume a system (e.g., a queue) at which packets arrive at rate a(t) • Let d(i) be the delay of packet i, i.e., the time packet i spends in the system • What is the average number of packets in the system? • Intuition: assume the arrival rate is a = 1 packet per second and the delay of each packet is s = 5 seconds. What is the average number of packets in the system?

  23. Little’s Theorem • d(i) = delay of packet i • x(t) = number of packets in transit (in the system) at time t • What is the system occupancy, i.e., the average number of packets in transit between points 1 and 2? • [Figure: traffic seen by time t at points 1 (sender) and 2 (receiver); x(t) is the gap between the two curves]

  24. Little’s Theorem • d(i) = delay of packet i • x(t) = number of packets in transit (in the system) at time t • Let S = the area between the two curves over an interval of length T • Average occupancy = S/T

  25. Little’s Theorem • d(i) = delay of packet i • x(t) = number of packets in transit (in the system) at time t • The area S decomposes into one strip S(i) per packet, of height P (the packet size) and width d(i): S = S(1) + S(2) + … + S(N) = P*(d(1) + d(2) + … + d(N))

  26. Little’s Theorem • d(i) = delay of packet i • x(t) = number of packets in transit (in the system) at time t • S/T = (P*(d(1) + d(2) + … + d(N)))/T = ((P*N)/T) * ((d(1) + d(2) + … + d(N))/N), i.e., average occupancy = (average arrival rate) * (average delay)

  27. Little’s Theorem • d(i) = delay of packet i • x(t) = number of packets in transit (in the system) at time t • Average occupancy = (average arrival rate) x (average delay)
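A small Python sketch that checks Little's theorem numerically on a toy FIFO queue; the Poisson arrivals and the fixed service time are illustrative assumptions:

```python
import random

random.seed(122)
service = 0.5                  # assumed fixed service time per packet, in seconds
arrivals, t = [], 0.0
for _ in range(10_000):        # assumed Poisson arrivals, mean rate 1 packet/second
    t += random.expovariate(1.0)
    arrivals.append(t)

# FIFO queue: a packet departs one service time after it reaches the head of the queue
departures, free_at = [], 0.0
for a in arrivals:
    free_at = max(free_at, a) + service
    departures.append(free_at)

delays = [d - a for a, d in zip(arrivals, departures)]
T = departures[-1]

# Left-hand side: (average arrival rate) x (average delay)
lhs = (len(arrivals) / T) * (sum(delays) / len(delays))

# Right-hand side: time-average occupancy, measured by sweeping the event sequence
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
area, x, last = 0.0, 0, 0.0
for when, change in events:
    area += x * (when - last)      # accumulate the integral of x(t)
    x, last = x + change, when

print(f"arrival rate x average delay = {lhs:.3f}")
print(f"time-average occupancy       = {area / T:.3f}")   # the two should match
```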

  28. Outline • Motivations • Timing diagrams • Metrics • Evaluation techniques

  29. Evaluation Techniques • Measurements: gather data from a real network (e.g., ping www.berkeley.edu); realistic, specific • Simulations: run a program that pretends to be a real network (e.g., the NS network simulator, the Nachos OS simulator) • Models, analysis: write some equations from which we can derive conclusions; general, may not be realistic • Usually a combination of methods is used

  30. Analysis • Example: M/M/1 queue • Arrivals are Poisson with rate a • Service times are exponentially distributed with mean 1/s; s is the rate at which packets depart from a non-empty queue • Average delay per packet: T = 1/(s – a) = (1/s)/(1 – u), where u = a/s = utilization • Numerical example: 1/s = 1 ms, u = 80% => T = 5 ms
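A short Python sketch that evaluates the M/M/1 delay formula for the numerical example and sanity-checks it with a quick simulation; the simulation parameters simply mirror the example (a = 800 packets/s, s = 1000 packets/s, so u = 0.8 and 1/s = 1 ms):

```python
import random

a, s = 800.0, 1000.0            # arrival rate and service rate (u = 0.8, 1/s = 1 ms)
print(f"formula: T = 1/(s - a) = {1.0 / (s - a) * 1e3:.2f} ms")   # 5 ms

# Quick check: simulate an M/M/1 FIFO queue and average the observed delays
random.seed(0)
t, free_at, total_delay, N = 0.0, 0.0, 0.0, 200_000
for _ in range(N):
    t += random.expovariate(a)                           # Poisson arrivals
    free_at = max(free_at, t) + random.expovariate(s)    # exponential service
    total_delay += free_at - t                           # time spent in the system
print(f"simulated average delay ~ {total_delay / N * 1e3:.2f} ms")
```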

  31. Simulation • Model of traffic • Model of routers, links • Simulation: • Time-driven: X(t) = state at time t; X(t+1) = f(X(t), event at time t) • Event-driven: E(n) = n-th event, Y(n) = state after event n, T(n) = time when event n occurs; [Y(n+1), T(n+1)] = g(Y(n), T(n), E(n)) • Output analysis: estimates, confidence intervals
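A minimal event-driven skeleton in Python following the [Y(n+1), T(n+1)] = g(Y(n), T(n), E(n)) structure above; the single-queue model and its rates are assumptions chosen only to make the skeleton runnable:

```python
import heapq
import random

random.seed(1)
events = []                          # priority queue of (time, event type)
heapq.heappush(events, (random.expovariate(1.0), "arrival"))

state = {"queue": 0, "now": 0.0}     # Y(n): queue length; T(n): current time

def g(state, event_time, event):
    """State-transition function: apply event E(n) at time T(n) and schedule
    any future events it triggers (next arrival, next departure)."""
    state["now"] = event_time
    if event == "arrival":
        state["queue"] += 1
        heapq.heappush(events, (event_time + random.expovariate(1.0), "arrival"))
        if state["queue"] == 1:      # server was idle: start a service
            heapq.heappush(events, (event_time + random.expovariate(1.25), "departure"))
    else:                            # departure
        state["queue"] -= 1
        if state["queue"] > 0:       # more work waiting: start the next service
            heapq.heappush(events, (event_time + random.expovariate(1.25), "departure"))
    return state

while events and state["now"] < 20.0:          # run for about 20 simulated seconds
    when, ev = heapq.heappop(events)           # next event in time order
    state = g(state, when, ev)
    print(f"t = {when:6.2f} s  {ev:9s}  queue = {state['queue']}")
```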

  32. Evaluation: Putting Everything Together • Usually favor plausibility and tractability over realism • Better to have a few realistic conclusions than none (could not derive) or many conclusions that no one believes (not plausible) • [Diagram relating Reality, Model, Conclusion, and Hypothesis via Abstraction (realism), Derivation (tractability), and Prediction (plausibility)]
