
Network Elements based on Partial State



Presentation Transcript


  1. Network Elements based on Partial State A. L. Narasimha Reddy Dept. of Electrical Engineering Texas A & M University reddy@ee.tamu.edu http://ee.tamu.edu/~reddy/

  2. Acknowledgements • Deying Tong (Cisco) • Sai Gopalakrishnan (Cisco) • Smitha (Intel) • Phani Achanta (Graduating in Aug. 2002)

  3. Introduction • Proposals for new network architectures • Full State (IntServ) • Difficult to scale per-flow state with # of flows • No State (DiffServ) • Flow isolation difficult

  4. Introduction • What if we can build network elements with some fixed amount of state? • State is not enough for all the flows • What kind of services can we provide? • Hypothesis: Only a few flows need state; most flows can be aggregated.

  5. Motivation • Typical Internet traffic consists of • Many short-lived flows (“mice”) • typically below 20 packets (approximately 20 KB) • Few large flows (“elephants”) • Current resource management techniques do not distinguish between these flows • Dropping packets from short-lived flows may do little to ease congestion • Also, mice flows are latency sensitive

  6. Motivation (contd.) • Congestion management “should” rely on controlling high-bandwidth flows • Offers more control over traffic • These flows are likely to be consuming disproportionate bandwidth • Likely to be “robust” (e.g., FTP) • May need mechanisms to control unresponsive applications • To improve fairness and to prevent congestion collapse

  7. Flow Classification • Long-lived flows • TCP flows (FTP applications) • UDP flows (video applications) • Short-lived flows • Telnet, HTTP transfers • Responsive vs. nonresponsive flows • FTP vs. some video transfers

  8. Basis for Partial State • A small fraction of flows contributes a large fraction of bytes. • If state can be allocated to these flows, resource management can be done efficiently without requiring full state.

  9.–11. Basis for Partial State (figures)

  12. Partial State Approach • Maintain a fixed amount of partial state • State does not depend on the number of flows • State depends on engineering concerns • Manage the state information to retain a history of high-BW flows -- How? • Adopt appropriate resource management based on the goals

  13. Partial State Approach • Similar to how caches are employed in computer memory systems • Exploit locality • Employ an engineering solution in an architecture-transparent fashion

  14. State Management • Sampling is employed as a basic tool • High-BW flows are more likely to be sampled • State is organized as a cache • Caches allow quick identification of whether a flow has been allocated state • State allocation can be • Policy driven • Traffic driven

  15. Policy Driven State Management • An ISP could decide to monitor flows above 1 Mbps • Will need state >= link capacity / 1 Mbps • Could monitor flows consuming more than 1% of link capacity • For security reasons • At most 100 flows can each consume 1% of the link BW
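A back-of-the-envelope sketch of this sizing rule is shown below (a minimal Python example; the 100 Mbps link rate is an illustrative assumption, not a figure from the talk):

```python
import math

def required_cache_entries(link_capacity_bps: float, min_monitored_rate_bps: float) -> int:
    """Upper bound on the number of flows that can each sustain the monitored
    rate on this link, i.e. the amount of per-flow state that suffices."""
    return math.ceil(link_capacity_bps / min_monitored_rate_bps)

# Monitoring flows above 1 Mbps on a (hypothetical) 100 Mbps link: at most 100 entries.
print(required_cache_entries(100e6, 1e6))            # 100
# Monitoring flows above 1% of capacity: at most 100 entries on any link.
print(required_cache_entries(100e6, 0.01 * 100e6))   # 100
```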

  16. Traffic Driven State Management • Monitor the top 100 flows at any time • Don’t know the identity of these flows • Don’t know how much BW these may consume • Employ LRU cache management • A flow’s packets have to arrive frequently for it to stay in the cache • Maintains high-BW flows in a self-organizing way

  17. Traffic Driven State Management (contd.) • Flows are admitted into the cache with a probability ‘p’. • This reduces the chance of short-term flows disturbing the cache state. • Keep a count of packet arrivals for cached flows • Declare a “high-BW” flow if count > threshold

  18. The Algorithm (flow chart) • On each new packet: • If the flow is in the cache: update its position and count • Else, if the cache has fewer than ‘S’ entries: make a new entry with count = 1 • Else: admit the flow into the cache with a probability ‘p’, with count = 1
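As a concrete illustration of the algorithm above, here is a minimal Python sketch. It is not the authors' implementation; the class and parameter names (PartialStateCache, size, admit_prob, threshold) are invented for the example, and eviction of the least-recently-used entry follows the LRU cache management described on slide 16.

```python
import random
from collections import OrderedDict

class PartialStateCache:
    """Fixed-size LRU cache of per-flow packet counts (sketch of slide 18)."""

    def __init__(self, size: int, admit_prob: float, threshold: int):
        self.size = size                # 'S': number of flow entries kept
        self.admit_prob = admit_prob    # 'p': admission probability when full
        self.threshold = threshold      # count at which a flow is declared high-BW
        self.flows = OrderedDict()      # flow_id -> packet count, in LRU order

    def on_packet(self, flow_id) -> bool:
        """Update state for one packet arrival; return True if the flow is
        currently classified as high-bandwidth."""
        if flow_id in self.flows:
            # Hit: update the flow's count and move it to the MRU position.
            self.flows[flow_id] += 1
            self.flows.move_to_end(flow_id)
        elif len(self.flows) < self.size:
            # Cache not full: make a new entry with count = 1.
            self.flows[flow_id] = 1
        elif random.random() < self.admit_prob:
            # Cache full: admit with probability 'p', evicting the LRU entry.
            self.flows.popitem(last=False)
            self.flows[flow_id] = 1
        return self.flows.get(flow_id, 0) >= self.threshold
```

A new entry starts with a count of 1, so a flow has to keep hitting the cache to cross the threshold; short or bursty flows are either never admitted (the probability ‘p’ filters them out) or drift to the LRU position and are evicted before accumulating a large count.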

  19. Why an LRU Cache? • High bandwidth flows arrive often • Stay in the cache for longer periods • Smooth flows stay in the cache longer compared to bursty flows • UDP flows (smooth) • TCP flows (bursty) • Responsive flows reduce rate and get replaced • Nonresponsive flows remain in cache

  20. UDP Cache Occupancy

  21. TCP Cache Occupancy

  22. Resource Management • Cached flows can be treated individually • Noncached flows treated in an aggregate manner • With larger state, finer control on traffic

  23. Resource Management • Preferential Dropping (RED based) • Drop cached flows more often • Use packet count as a scaling function • Fair queuing • Cached flows, noncached flows in separate queues, employ WFQ • Possible to protect noncached flows from cached flows

  24. Resource Management

  25. Preferential Dropping (figure: drop probability vs. queue length, rising between min_th and max_th toward max_p, with a higher drop-probability curve for high-bandwidth flows than for other flows)

  26. Preferential Dropping (contd.) • As congestion builds up (average queue length above min_th): • if (flow->count >= ‘threshold’) P_drop = P_red * flow->count / ‘threshold’; else P_drop = P_red • High-BW nonresponsive flows get higher drops • Low-BW and responsive flows see lower drops
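A minimal sketch of this scaled-drop rule, assuming P_red denotes the base RED drop probability at the current average queue length (the function and variable names are ours, and the clamp to 1.0 is an added safeguard not stated on the slide):

```python
def drop_probability(p_red: float, flow_count: int, threshold: int) -> float:
    """Scale the base RED drop probability for flows the cache marks as high-BW.

    p_red      -- base RED drop probability at the current average queue length
    flow_count -- packet count recorded for this flow in the partial-state cache
                  (0 for uncached flows)
    threshold  -- count at which a flow is treated as high-bandwidth
    """
    if flow_count >= threshold:
        # High-bandwidth flows are dropped more aggressively, in proportion
        # to how far their count exceeds the threshold.
        return min(1.0, p_red * flow_count / threshold)
    # Low-BW, responsive, and uncached flows see only the base RED probability.
    return p_red

# Example: with a base drop probability of 2%, a flow whose count is 5x the
# threshold sees 10%, while everyone else stays at 2%.
print(drop_probability(0.02, 625, 125))  # 0.1
print(drop_probability(0.02, 40, 125))   # 0.02
```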

  27. Two Studies • LRU-RED: Simulation-based study • Provide lower drop rates for responsive and short-term flows • Approximately fair BW distribution • LRU-FQ: Linux-based partial state router prototype • Contain DoS attacks • Provide shorter delays for short-term flows

  28. LRU-RED Simulations (topology figure: routers R1 and R2, with 20 Mb, 20 Mb, and 40 Mb links)

  29. Topology 2 (figure: routers R1, R2, and R3, with 20 Mb, 20 Mb, 40 Mb, and 30 Mb links)

  30.–33. LRU-RED Results (figures)

  34. Varying Load

  35. RTT Bias - TCP Flows

  36. Summary of LRU-RED • LRU cache is effective in identifying high bandwidth, nonresponsive flows • Combined the above with RED to propose a novel active queue management scheme • Simulation results show the effectiveness of the scheme • Sampling can further reduce the per-packet cost

  37. LRU-FQ Resource Management

  38. LRU-FQ Flow Chart – Enqueue • On packet arrival, check whether the flow is in the cache • If it is: increment its ‘count’ and move the flow to the top of the cache • If it is not: admit it when the cache has space; otherwise admit it with probability ‘p’; an admitted flow has its details recorded and its ‘count’ initialized to 0 • If the flow’s ‘count’ >= ‘threshold’, enqueue the packet in the Non-responsive Queue; otherwise enqueue it in the Responsive Queue

  39. LRU-FQ – Dequeue event • A dequeue event results in the selection of a packet from one of the two queues, based on the fair queuing algorithm used. • The weights assigned to the individual queues determine the proportion of bandwidth they receive.

  40. Implementation Issues on Linux

  41. Linux IP Packet Forwarding • Packet arrival (link layer): the device driver checks and stores the packet, enqueues it, and requests the scheduler to invoke the bottom half • IP layer (once the scheduler runs the bottom half): error checking, destination verification, routing to the destination, and packet updates; local packets are delivered to the upper layers • Packet departure (link layer): the packet is enqueued for output -- the design space for this work -- then the scheduler runs, the device prepares the packet, and the packet departs

  42. Linux Kernel traffic control • Filters are used to distinguish between different classes of flows. • Each class of flows can be further categorized into sub-classes using filters. • Queuing disciplines control how the packets are enqueued and dequeued

  43. LRU-FQ Implementation • The LRU component of the scheme is implemented as a filter. • All parameters (threshold, probability, and cache size) are passed to the filter • Fair queuing is employed as the queuing discipline. • Scheduling is based on queue weights. • Start-time Fair Queuing
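To make the scheduling side concrete, the sketch below implements start-time fair queuing over the two queues (responsive and non-responsive). It is an illustrative Python reimplementation with invented names, not the Linux filter/qdisc code described above: packets are stamped with a start tag on enqueue, and dequeue serves the queue whose head packet has the smallest start tag.

```python
from collections import deque

class TwoQueueSFQ:
    """Start-time fair queuing over a responsive and a non-responsive queue."""

    def __init__(self, weights=(1.0, 1.0)):
        self.weights = weights              # queue weights, e.g. 1:1
        self.queues = (deque(), deque())    # 0: responsive, 1: non-responsive
        self.last_finish = [0.0, 0.0]       # finish tag of the last packet per queue
        self.vtime = 0.0                    # virtual time = start tag of packet in service

    def enqueue(self, qid: int, packet, length: int):
        # Start tag: no earlier than the virtual time or the queue's last finish tag.
        start = max(self.vtime, self.last_finish[qid])
        self.last_finish[qid] = start + length / self.weights[qid]
        self.queues[qid].append((start, packet))

    def dequeue(self):
        # Serve the non-empty queue whose head packet has the smallest start tag.
        # (Busy-period bookkeeping of the full algorithm is omitted in this sketch.)
        heads = [(q[0][0], qid) for qid, q in enumerate(self.queues) if q]
        if not heads:
            return None
        start, qid = min(heads)
        self.vtime = start
        return self.queues[qid].popleft()[1]
```

With equal weights (1:1, as in the web-mice experiments later in the talk), a backlogged non-responsive queue is limited to roughly half the link bandwidth, which is what protects the responsive and short-lived flows.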

  44. LRU-FQ - Results

  45. Timing Results

  46. Long-Term Flow Differentiation (normal TCP fraction = 0.07; probability = 1/25, cache size = 11, threshold = 125)

  47. Long-Term Flow Differentiation (probability = 1/25, cache size = 11, threshold = 125)

  48. Protecting Web Mice

  49. Protecting Web Mice: Experimental Setup • Long-term TCP flows: 20 • Long-term UDP flows: 2 to 4 • Web clients: 20 • Probability: 1/50 • Threshold: 125 • LRU cache size: 11 • LRU : Normal queue weights: 1:1

  50. Protecting Web Mice: Bandwidth Results (columns per router: UDP Tput / # Web Requests / TCP Tput / TCP Fraction)
  UDP Flows | Normal Router                  | LRU-FQ Router
  2         | 89.187 / 960  / 6.285 / 0.0658 | 47.392 / 3131 / 46.884 / 0.4973
  3         | 89.000 / 1098 / 6.478 / 0.0678 | 45.871 / 2956 / 47.863 / 0.5106
  4         | 89.265 / 1007 / 6.268 / 0.0656 | 47.547 / 2965 / 46.677 / 0.4953
