Explore hybrid modeling in communication networks to validate designs, analyze protocols, and tune parameters efficiently. Learn packet-level, fluid-based, and hybrid modeling types with real-world applications.
Hybrid Control and Switched Systems: Hybrid Systems Modeling of Communication Networks. João P. Hespanha, University of California at Santa Barbara
Motivation • Why model network traffic? • to validate designs through simulation (scalability, performance) • to analyze and design protocols (throughput, fairness, security, etc.) • to tune network parameters (queue sizes, bandwidths, etc.)
Types of models • Packet-level modeling • tracks individual data packets, as they travel across the network • ignores the data content of individual packets • sub-millisecond time accuracy • computationally very intensive • Fluid-based modeling • tracks time/ensemble-average packet rates across the network • does not explicitly model individual events (acknowledgments, drops, queues becoming empty, etc.) • time accuracy of a few seconds for time-average • only suitable to model many similar flows for ensemble-average • computationally very efficient (at least for first order statistics)
Types of models captures fast dynamicseven for a small number of flow • Hybrid modeling • keeps track of packet rates for each flow averaged over small time scales • explicitly models some discrete events (drops, queues becoming empty, etc.) • time accuracy of a few milliseconds (round-trip time) • computationally efficient provide information about both average, peak, and “instantaneous” resource utilization(queues, bandwidth, etc.)
Summary • Modeling 1st pass: Dumbbell topology & simplified TCP • Modeling 2nd pass: General topology, TCP and UDP models • Validation • Simulation complexity
1st pass – Dumbbell topology. Flows f1, f2, f3 enter at rates r1, r2, r3 bps; a single queue drains into the bottleneck link at rate ≤ B bps; q(t) ≡ queue size. Several flows follow the same path and compete for bandwidth in a single bottleneck link. Prototypical network to study congestion control: routing is trivial, there is a single queue, and B is unknown to the data sources and possibly time-varying.
Queue dynamics. When Σf rf exceeds B, the queue fills and data is lost ⇒ drop (discrete event, relevant for congestion control).
Queue dynamics. Hybrid automaton representation: the transition enabling condition (queue reaching capacity while overloaded) generates an exported discrete event (drop).
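The two-mode queue above can be sketched as a small simulation. This is a minimal illustration assuming constant arrival rates and a fixed buffer; the function and parameter names are ours, not from the slides:

```python
def simulate_queue(rates, B, qmax, dt=1e-3, T=1.0):
    """Integrate dq/dt = sum(rates) - B, clipped to [0, qmax];
    record a drop event whenever the full queue is still overloaded."""
    q, t = 0.0, 0.0
    traj, drops = [], []
    while t < T:
        inflow = sum(rates)
        if q >= qmax and inflow > B:
            drops.append(t)          # exported discrete event: data lost
            q = qmax                 # queue stays saturated
        else:
            q = min(max(q + (inflow - B) * dt, 0.0), qmax)
        traj.append(q)
        t += dt
    return traj, drops
```

With Σf rf > B the trajectory ramps up to qmax and drop events begin, matching the slide's enabling condition.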
Window-based rate adjustment. wf (window size) ≡ number of packets that can remain unacknowledged by the destination. E.g., with wf = 3: the 1st, 2nd, and 3rd packets are sent at t0, t1, t2; each is received and acknowledged; only when the 1st ack arrives at t3 ⇒ the 4th packet can be sent. Over one round-trip time, wf effectively determines the sending rate rf.
Window-based rate adjustment. The sending rate is the window size divided by the total round-trip time (propagation delay + time in queue until transmission). This creates negative feedback: queue gets full ⇒ longer RTT ⇒ rate decreases ⇒ queue gets empty. This mechanism alone is still not sufficient to prevent a catastrophic collapse of the network if the sources set wf too large.
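The rate relation on this slide can be written out directly. A sketch with illustrative names (`sending_rate` and its parameters are ours, not from the slides):

```python
def sending_rate(w, prop_delay, q, B):
    """Packets per second for a window of w packets, given the
    propagation delay and the queueing delay q / B."""
    rtt = prop_delay + q / B        # total round-trip time
    return w / rtt

# The negative feedback from the slide: a fuller queue means a longer
# RTT, which lowers the rate for the same window size.
assert sending_rate(10, 0.1, 0.0, 1e6) > sending_rate(10, 0.1, 5e4, 1e6)
```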
TCP congestion avoidance • While there are no drops, increase wf by 1 on each RTT (additive increase) • When a drop occurs, divide wf by 2 (multiplicative decrease) • (the congestion controller constantly probes the network for more bandwidth). Disclaimer: this is a very simplified version of TCP Reno; better models later…
TCP + Queuing model: the additive-increase/multiplicative-decrease window update is coupled to the queuing model through the sending rate rf, the RTT, and the drop events. Disclaimer: this is a very simplified version of TCP Reno; better models later…
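A minimal sketch of this closed loop, with one flow, one step per RTT, and all parameters illustrative (chosen by us, not taken from the slides):

```python
def aimd_with_queue(B=100.0, qmax=20.0, prop=0.1, T=200, w0=1.0):
    """Simplified AIMD + single-bottleneck queue; one step per RTT.
    Returns the window trajectory (packets)."""
    w, q = w0, 0.0
    ws = []
    for _ in range(T):
        rtt = prop + q / B          # queueing delay lengthens the RTT
        rate = w / rtt              # window-based sending rate
        q = min(max(q + (rate - B) * rtt, 0.0), qmax)
        if q >= qmax:               # drop event
            w = w / 2.0             # multiplicative decrease
        else:
            w = w + 1.0             # additive increase
        ws.append(w)
    return ws
```

The trajectory settles into the familiar AIMD sawtooth: linear growth between drops, halving at each drop.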
Linearization of the TCP model. Time normalization ≡ define a new "time" variable τ with 1 unit of τ ≡ 1 round-trip time. In normalized time, the continuous dynamics of the TCP + queuing model (additive increase, multiplicative decrease) become linear.
Impact-map analysis. Let xk ≡ the continuous state (x1, x2) just before the kth multiplicative decrease. Between decreases the state flows under additive increase; the impact map T sends the state on the transition surface at one multiplicative decrease to the state at the next, i.e., xk+1 = T(xk).
Impact-map analysis. Theorem. The function T is a contraction. In particular: • xk → x∞ as k → ∞, with x∞ ≡ constant • x(t) → x∞(t) as t → ∞, with x∞(t) ≡ periodic limit cycle.
NS-2 simulation results. [Figure: dumbbell topology, TCP sources s1-s8 through routers R1 and R2 over a 20Mbps/20ms bottleneck link to the TCP sinks; plot of the window sizes w1-w8 and the queue size q (packets) over 0-50 seconds.]
Results. Window synchronization: convergence is exponential, as fast as 0.5^k. Steady-state formulas follow for the average drop rate, the average RTT, and the average throughput (the well-known TCP-friendly formula).
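The "TCP-friendly" formula referenced here is usually written as follows (throughput in packets per second, with p the per-packet drop probability; the constant varies slightly across derivations):

```latex
r_{\text{avg}} \;\approx\; \frac{1}{\text{RTT}}\sqrt{\frac{3}{2p}}
```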
2nd pass – general topology. A communication network can be viewed as the interconnection of several blocks with specific dynamics: a) routing, b) queuing (network dynamics: in-queue rates, out-queue rates, queue sizes), c) end-to-end congestion control (the server adjusts its sending rate based on the acks & drops returned by the client).
Routing determines the sequence of links followed by each flow. Conservation of flows: the in-queue rate of flow f at a link equals either the end-to-end sending rate of flow f (at its first link) or the upstream out-queue rate of flow f; the link indexes l and l' are determined by the routing tables.
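The conservation-of-flows bookkeeping can be sketched as a small lookup. The data structures here are illustrative (paths as link lists, rates keyed by flow and link), not the slides' notation:

```python
def in_queue_rates(paths, send_rates, out_rates):
    """paths[f] = ordered list of links for flow f;
    out_rates[(f, l)] = out-queue rate of flow f at link l.
    Returns {(f, l): in-queue rate of flow f at link l}."""
    rates = {}
    for f, path in paths.items():
        for i, link in enumerate(path):
            if i == 0:
                rates[(f, link)] = send_rates[f]            # fed by the source
            else:
                rates[(f, link)] = out_rates[(f, path[i - 1])]  # fed upstream
    return rates
```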
Routing determines the sequence of links followed by each flow; the same formulation extends to multicast and to multi-path routing.
Queue dynamics. State: the queue size due to flow f and the total queue size; inputs: the in-queue rates; outputs: the out-queue rates and drop rates, constrained by the link bandwidth. The packets of each flow are assumed uniformly distributed in the queue.
Queue dynamics, three discrete modes: • queue empty: out-queue rates equal in-queue rates, no drops • queue not empty/full: out-queue rates proportional to each flow's fraction of the packets in the queue, no drops • queue full: drops proportional to the in-queue rates.
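The per-flow rates under the uniform-mixing assumption can be sketched directly. A minimal illustration (function and argument names are ours): each flow's share of the output bandwidth equals its share of the queued packets, and in the full mode the excess inflow is dropped in proportion to the in-queue rates:

```python
def per_flow_rates(q_f, in_f, B, full):
    """q_f: per-flow queue contents; in_f: per-flow in-queue rates;
    B: link bandwidth; full: whether the queue is full.
    Returns (out-queue rates, drop rates) per flow."""
    q_total = sum(q_f)
    if q_total > 0:
        out = [B * qf / q_total for qf in q_f]   # proportional to occupancy
    else:
        out = list(in_f)                          # empty queue: pass-through
    in_total = sum(in_f)
    if full and in_total > B:
        excess = in_total - B
        drops = [excess * rf / in_total for rf in in_f]  # proportional split
    else:
        drops = [0.0] * len(in_f)
    return out, drops
```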
Drop events. When? While the queue is full, the time between consecutive drops tk is the time needed to accumulate one packet of excess data: (packet size) / (total in-queue rate − total out-queue rate), where the total out-queue rate equals the link bandwidth.
Drop events. Which flows? With drop-tail dropping, each drop event at time tk is assigned to a specific flow: the one whose packet arrives while the queue is full.
Hybrid queue model. Discrete modes: queue-not-full and queue-full; the transition enabling condition (queue reaching capacity) generates an exported discrete event (drop).
Hybrid queue model with Random Early Drop (RED) active queuing: the discrete modes (queue-not-full, queue-full) are augmented with a stochastic counter that generates random drop events before the queue is full.
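The RED drop probability can be sketched as a simple profile: zero below a minimum threshold, ramping linearly up to a maximum probability at a maximum threshold, and forced drops above it. Thresholds and p_max here are illustrative values of our choosing:

```python
def red_drop_prob(q_avg, min_th=5.0, max_th=15.0, p_max=0.1):
    """Probability of dropping an arriving packet given the
    (averaged) queue size, in the classic RED ramp shape."""
    if q_avg < min_th:
        return 0.0                  # queue short enough: never drop
    if q_avg >= max_th:
        return 1.0                  # above max threshold: always drop
    return p_max * (q_avg - min_th) / (max_th - min_th)
```

Because drops begin while the queue is still not full, the hybrid model's drop events can fire in the queue-not-full mode, driven by the stochastic counter.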
Network dynamics & congestion control. Closed loop: routing maps the sending rates to in-queue rates; the queue dynamics produce the out-queue rates and drops; end-to-end congestion control (TCP/UDP) adjusts the sending rates in response.
Additive Increase/Multiplicative Decrease • While there are no drops, increase wf by 1 on each RTT (additive increase) • When a drop occurs, divide wf by 2 (multiplicative decrease) • (the congestion controller constantly probes the network for more bandwidth). The drop is an imported discrete event; the propagation delays and the set of links traversed by flow f enter the congestion-avoidance dynamics. TCP Reno is based on AIMD but uses other discrete modes to improve performance.
Slow start. In the beginning, pure AIMD takes a long time to reach an adequate window size. • Until a drop occurs (or a threshold ssthf is reached), double wf on each RTT • When a drop occurs, divide wf and the threshold ssthf by 2. Modes: slow-start → cong.-avoid. Especially important for short-lived flows…
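The two growth modes can be sketched side by side; a minimal illustration (names and parameters are ours) of exponential growth up to the threshold, then additive increase, ignoring drops:

```python
def window_growth(w0, ssth, rtts):
    """Window size after the given number of RTTs, with no drops:
    doubling per RTT below ssth (slow start), +1 per RTT after."""
    w, mode = float(w0), "slow-start"
    for _ in range(rtts):
        if mode == "slow-start" and w < ssth:
            w = min(2 * w, ssth)    # exponential growth, capped at ssth
            if w >= ssth:
                mode = "cong-avoid"
        else:
            w += 1.0                # additive increase
    return w
```

Starting from w0 = 1, slow start reaches a window of 16 in 4 RTTs, whereas additive increase alone would need 15; this is why slow start matters for short-lived flows.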
Fast recovery. After a drop is detected, new data should be sent while the dropped packet is retransmitted. • During retransmission, data is sent at a rate consistent with a window size of wf/2. Modes: slow-start, cong.-avoid., fast-recovery. (Consistent with TCP SACK for multiple consecutive drops.)
Timeouts. Typically, drops are detected because one acknowledgment in the sequence is missing: the 1st packet is sent and dropped; the 2nd, 3rd, and 4th packets are sent, received, and acknowledged; after three acks received out of order, the drop is detected and the 1st packet is re-sent. When the window size becomes smaller than 4, this mechanism fails and drops must be detected through acknowledgment timeout. • When a drop is detected through timeout: • the slow-start threshold ssthf is set equal to half the window size, • the window size is reduced to one, • the controller transitions to slow-start.
Fast recovery, timeouts, drop-detection delay… TCP SACK version
Network dynamics & congestion control. The full interconnection also feeds the RTTs from the network dynamics back to the end-to-end congestion control; see the SIGMETRICS paper for the on/off TCP & UDP model.
Validation methodology • Compared simulation results from the ns-2 packet-level simulator against hybrid models implemented in Modelica • Plots in the following slides refer to two test topologies, a dumbbell and a Y-topology: one with 10ms propagation delay, the other with 45, 90, 135, 180ms propagation delays; both with drop-tail queuing, 5-500Mbps bottleneck throughput, and 0-10% UDP on/off background traffic
Simulation traces. [Figure: cwnd of TCP 1 and queue size (packets) vs. time over 0-20 seconds, ns-2 vs. hybrid model.] • single TCP flow • 5Mbps bottleneck throughput • no background traffic. Slow-start, fast recovery, and congestion avoidance are accurately captured.
Simulation traces. [Figure: cwnd sizes of TCP 1-4 and queue sizes of Q1, Q2 (packets) vs. time over 0-20 seconds, ns-2 vs. hybrid model.] • four competing TCP flows (starting at different times) • 5Mbps bottleneck throughput • no background traffic. The hybrid model accurately captures flow synchronization.
Simulation traces. [Figure: cwnd sizes of TCP 1-4 (propagation delays 45, 90, 135, 180ms) and queue sizes of Q1, Q3 vs. time over 0-20 seconds, ns-2 vs. hybrid model.] • four competing TCP flows (different propagation delays) • 5Mbps bottleneck throughput • 10% UDP background traffic (exp. distributed on-off times).
Average throughput and RTTs • four competing TCP flows with different propagation delays (45, 90, 135, 180ms) • drop-tail queuing • 5Mbps bottleneck throughput • 10% UDP on/off background traffic (exp. distributed on-off times). The hybrid model accurately captures TCP unfairness for different propagation delays.
Empirical distributions. [Figure: empirical probability distributions of the cwnd of TCP 1-4 and of the size of Queue 3 (packets), ns-2 vs. hybrid model.] The hybrid model captures the whole distribution of congestion windows and queue sizes.
Execution time. [Figure: execution time vs. number of flows for ns-2 and the hybrid model, for 1 and 3 flows at 5, 50, and 500Mbps.] • ns-2 complexity approximately scales with the number of packets, and therefore with the per-flow throughput • hybrid simulator complexity does not grow with the per-flow throughput. Hybrid models are particularly suitable for large, high-bandwidth simulations (satellite, fiber optics, backbone).