Measuring & Managing Distributed Networked Systems
CSE Science of Complex Systems Seminar, December 6, 2006
Chen-Nee Chuah
Robust & Ubiquitous Networking (RUBINET) Lab
http://www.ece.ucdavis.edu/rubinet
Electrical & Computer Engineering
Networked Computers: Trends
• Explosive growth in numbers: hand-held devices (PDAs, etc.), laptops, wireless sensors, smart home appliances, intelligent transportation systems, supercomputers, the Mars rover vehicle
• Different capabilities/constraints
• Different requirements
We know how individual components/layers behave, but when they are inter-connected and start interacting with each other, we need to cope with the unexpected!
What do networking researchers/engineers care about when they design networks/protocols? • End-to-end behavior • Reachability • Performance in terms of delay, losses, throughput • Security • Stability/fault-resilience of the end-to-end path • System-wide behavior • Scalability • Balanced load distribution within a domain • Stability/Robustness/Survivability • Manageability • Evolvability and other X-ities • J. Kurose, INFOCOM’04 Keynote Speech
How do we know when we get there?
• We know how to do the following fairly well:
  • Prove correctness/completeness of a stand-alone system or protocol
    • E.g., algorithm complexity, convergence behavior
  • Look at steady-state, worst-case, and average scenarios
    • E.g., queuing models
  • Run simulations/experiments to show improvement of protocol/architecture Z over A, B, C, D …
• What is lacking:
  • End-to-end validation of the design solution or system behavior
    • Is the system behavior what we really intended?
    • How do we verify which behaviors/properties are 'correct' and which are 'abnormal'?
  • Verification of the system 'dynamics', e.g., how different components or network layers interact
Challenges
• End-to-end system behavior depends on: physical topology, logical topology, traffic demand, routing protocols, BGP policies, NAT boxes, firewalls, packet filters, packet transformers
• Messy dependency graphs => a lot to model if we truly want to understand, and be able to validate, system behavior
Research Philosophy What’s lacking? • A systematic approach to measure and validate end-to-end behavior & verify system dynamics Our Approach • Combination of measurement, modeling, inferencing, and experimental techniques
Talk Outline Two sample problem areas: • Characterizing ‘Service Availability’ of IP networks • Measuring & understanding Internet routing dynamics • Availability as a metric for differentiating topologies • Example application: graceful network upgrade • Modeling and Validating Firewall Configurations • Validating end-to-end reachability • Static analysis on firewall rules
Overview
• What does the current Internet look like?
  • Topologies
    • Inter-domain graphs follow a power law
    • Intra-domain: star/hub, mesh
    • North America: backbone links run along train tracks
  • Routing hierarchy: inter-domain vs. intra-domain routing
  • Distributed management
    • Different tiers of ASes (customers vs. Tier-1/2/3 ISPs)
  • Multiple players
    • ISPs, content providers, application providers, customers
Service Availability
• The 99.999% ("five nines") equivalent of the PSTN?
• Service Level Agreements (SLAs) offered by an ISP (a single AS):
  • Performance in terms of average delay and packet loss
    • 0.3% loss, speed-of-light end-to-end latencies
  • Port availability
  • Time to fix a reported problem
• What do Tier-1 ISPs do to meet SLAs?
  • Over-provisioning
  • Load balancing on a per-prefix or per-packet basis
• End-to-end availability (spanning multiple ASes)?
  • Does not exist!
  • ISPs do not share proprietary topology and operational info with one another
Service Availability: Challenges
• Typical Service Level Agreements (SLAs) or static graph properties (like out-degree or network diameter) do not account for instantaneous network operational conditions
• Service availability is an important factor for designing & evaluating:
  • Failure protection/restoration mechanisms
  • Traffic engineering, e.g., IGP weight selection, capacity provisioning
  • Network design, e.g., topology, link/node upgrades
  • Router configuration, e.g., tuning protocol timers
• Given different choices of network topologies, how do we know which one offers good service availability?
• So, what do we need? Understanding Internet routing failures would be a good first step!
Integrated Monitoring
[Diagram: monitoring infrastructure across Points-of-Presence (PoPs) and customer links]
• Where?
  • Tier-1 ISP backbone (600+ nodes), enterprise/campus networks
• What?
  • BGP/IGP passive route listeners, SONET alarm logs
  • IPMON/CMON passive traffic monitoring & active probes
  • Controlled failure experiments
  • Router configurations and BGP policies
Questions we want to answer
• IGP/BGP routing behavior during sunny and rainy days
  • How frequently do failures occur?
  • How do routes behave during convergence or routing instabilities?
  • What are the causes?
  • Are there anomalies?
  • How do failures/instability/anomalies propagate across networks?
• How does the control plane impact data forwarding?
  • What actually happens to the packets? What is the effect on end-to-end service availability?
• How do network components/protocols interact?
  • Unexpected coupling, race conditions, conflicts?
Lessons Learned from Phase 1 (1)
[Time series: number of link failures per day over the measurement period (Nov–Oct)]
• Transient, individual link failures are the norm
• No, failures are not independent & uniformly distributed [INFOCOM'04]
Backbone Failure Characteristics
• How often? (time between failures)
  • Weibull distribution: P(time between failures > t) ≈ exp(−(t/λ)^k)
• How are failures spread across links?
  • Power law: n_l ∝ l^(−α) (the link with rank l has n_l failures)
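To make the two distributions concrete, here is a minimal sketch that draws inter-failure times from a Weibull distribution and spreads failures across links according to a power law. The parameter values (k, λ, α) are illustrative placeholders, not the values measured in the INFOCOM'04 study.

```python
import numpy as np

# Sketch of the empirical failure model above, with made-up parameters.
# The measured lambda, k, and alpha are not reproduced here; these values
# are placeholders for illustration only.
rng = np.random.default_rng(0)

# Time between failures on a link: Weibull with shape k < 1 (heavy-tailed).
k, lam = 0.5, 3600.0          # shape, scale in seconds (assumed)
inter_failure_times = lam * rng.weibull(k, size=1000)

# Failures across links: link with rank l gets n_l ~ l^(-alpha) failures.
alpha, n_links, total_failures = 1.0, 100, 10000
weights = np.arange(1, n_links + 1, dtype=float) ** (-alpha)
failures_per_link = total_failures * weights / weights.sum()

print(f"median time between failures: {np.median(inter_failure_times):.0f} s")
print(f"top-ranked link share of failures: {failures_per_link[0] / total_failures:.1%}")
```

A shape parameter k < 1 gives the heavy tail that makes bursts of closely spaced failures much more likely than an exponential model would suggest.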
Revisit Service Availability • So, we now have the failure model…. but that’s only one piece of the puzzle • Failure recovery is *NOT* instantaneous => forwarding can be disrupted during route re-convergence • Overload/Congestion on backup paths
Service Convergence After Failures
Steps following a link failure:
  1a. Failure detection; 1b. IS-IS notification → delete IS-IS adjacency
  2. Generate LSP & flood it through the network
  3. SPF computation & RIB update
  4. Update FIB on linecards
• Protocol convergence = steps 1–3, but data forwarding only resumes after step 4
  • FIB updates depend on the number of routing prefixes
• Invalid routes during route convergence (2–8 seconds) => complete black-holing of traffic => packet drops!
Service Convergence Delay
• Detection of link down (SONET): <100 ms
• Timer to filter out transient flaps: 2 s
• Timer before sending any message out: 50 ms
• Flooding of the message in the network: ~10 ms/hop, O(network distance)
• Timer before computing the new shortest paths: 5.5 s
• Computation of the new shortest paths: 100–400 ms, O(network size)
=> Protocol-level convergence: worst case 5.9 s
• Update of the Forwarding Information Base: 1.5–2.1 s, O(network size + BGP peering points)
=> Service-level convergence: worst case 8.0 s
Service availability depends on topology, failure model, router configurations, router architecture, and location of peering points/customers [IWQoS'04]
Predicting Service Availability
• Network convergence time is only a rough upper bound on the actual service disruption time (SD time)
• Service availability (SA):
  • Three metrics: service disruption time (SD), traffic disruption (TD), and delay violation (DV)
  • Three perspectives: ingress node, link, and network-wide
• Developed a Java-based tool to estimate SA
  • Given topology, link weights, BGP peering points, and traffic demand
• Goodness factors derived from SA can be used to characterize different networks [IWQoS'04]
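As a rough illustration of what such a tool computes, the sketch below (Python with networkx, not the Java tool from the talk) evaluates each single-link failure on a toy topology: a flow counts as disrupted if the failed link lies on its current shortest path, TD sums the disrupted demand, and SD is approximated by a fixed worst-case convergence time. The topology, demands, and convergence constant are all assumptions.

```python
import itertools
import networkx as nx

# Toy estimate of service disruption (SD) and traffic disruption (TD) under
# single-link failures. A flow is disrupted iff the failed link is on its
# current shortest path, and SD is a fixed assumed convergence time.
CONVERGENCE_TIME_S = 5.9  # assumed worst-case protocol convergence

G = nx.Graph()
G.add_weighted_edges_from([
    ("SEA", "SJC", 1), ("SEA", "CHI", 1), ("SJC", "CHI", 1),
    ("CHI", "NYC", 1), ("SJC", "NYC", 2),
])
demand = {(s, d): 1.0 for s, d in itertools.permutations(G.nodes, 2)}  # uniform traffic matrix
paths = dict(nx.all_pairs_dijkstra_path(G))

for failed in list(G.edges):
    affected = [
        (s, d) for (s, d) in demand
        if failed in zip(paths[s][d], paths[s][d][1:])
        or tuple(reversed(failed)) in zip(paths[s][d], paths[s][d][1:])
    ]
    td = sum(demand[f] for f in affected)          # traffic volume disrupted
    sd = CONVERGENCE_TIME_S if affected else 0.0   # disruption time seen by those flows
    print(f"failure of {failed}: TD={td:.1f}, SD={sd:.1f}s, flows affected={len(affected)}")
```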
Simulation Studies • Two sets of topologies • Set A: Ring, Mesh, 2 Tier 1 ISPs • Set B: 10 similar topologies • Realistic Traffic Matrix • Based on data from tier 1 ISPs – 20% large nodes, 30% medium nodes and 50% small nodes • Prefix Distribution • Proportional to traffic • Equal link metric assignment
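A minimal sketch of this setup follows. The 20/30/50% node-size split and prefix counts proportional to traffic are from the slide; the gravity-model traffic matrix and the numeric weights are illustrative assumptions, not details given in the talk.

```python
import numpy as np

# Classify nodes as large/medium/small (20%/30%/50%) and build a traffic
# matrix. The gravity model and the size weights here are assumptions.
rng = np.random.default_rng(1)
sizes = rng.permutation([10.0] * 4 + [3.0] * 6 + [1.0] * 10)  # 4 large, 6 medium, 10 small

tm = np.outer(sizes, sizes)                 # gravity model: demand ~ size_i * size_j
np.fill_diagonal(tm, 0.0)
prefixes = 100000 * sizes / sizes.sum()     # prefix count proportional to traffic share
print(tm.shape, int(prefixes.max()))
```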
Example Application 1: Network Design – Choice of Topologies
[CDFs of traffic disruption and of the number of delay-parameter violations due to single link failures, comparing ring, mesh, and two Tier-1 ISP topologies]
• Can differentiate topologies based on SA metrics
Example Application 2: Identifying Links for Upgrade
[Badness factor per link for Topology-6]
Example Application 3: Graceful Network Upgrade
• Tier-1 ISP backbone networks
  • Nodes = Points of Presence (PoPs); links = inter-PoP links
• "Graceful" network upgrade
  • Adding new nodes and links to an existing operational network results in a new topology and new operational conditions => the impact of link failures could be different
  • The impact on service availability of existing customers should be minimized during the upgrade process
• Assumptions:
  • ISPs can reliably determine the future traffic demands when new nodes are added
  • ISPs can reliably estimate the revenue associated with the addition of every new node
  • Revenues resulting from new nodes are independent of each other
Graceful Network Upgrade: Problem Statement • Given a budget constraint: • Find where to add new nodes and how they should be connected to the rest of the network • Determine an optimal sequence for adding these new nodes/links to minimize impact on existing customers • Two-Phase Framework* • Phase-1: Finding Optimal End State • Formulated as an optimization problem • Phase-2: Multistage Node Addition Strategy • Formulated as a multistage dynamic programming (DP) problem *Joint work with R. Keralapura and Prof. Y. Fan
Phase-1: Finding Optimal End State
• Objective function: maximize the total revenue from the new nodes added
• Subject to:
  • Budget constraint: total node installation cost + link installation cost ≤ budget
  • Node degree constraint
Phase-1: Finding Optimal End State (Cont’d) • SD Time constraint • TD constraint • Link utilization constraint • Can be solved by non-linear programming techniques – tabu search, genetic algorithms
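For illustration, here is a greedy sketch of Phase-1 under the stated constraints; it is not the non-linear program (tabu search / genetic algorithm) from the talk, and the revenue, cost, and feasibility inputs are hypothetical placeholders.

```python
# Greedy sketch of Phase-1: add candidate nodes in order of revenue per unit
# install cost while the budget holds and a caller-supplied feasibility check
# (node degree, SD time, TD, link utilization) passes.

def phase1_greedy(candidates, revenue, install_cost, budget, feasible):
    chosen, spent = [], 0.0
    ranked = sorted(candidates, key=lambda c: revenue[c] / install_cost[c], reverse=True)
    for c in ranked:
        if spent + install_cost[c] > budget:
            continue                       # budget constraint
        if not feasible(chosen + [c]):
            continue                       # degree / SD / TD / utilization constraints
        chosen.append(c)
        spent += install_cost[c]
    return chosen

# Example with toy numbers:
picked = phase1_greedy(
    candidates=["Phoenix", "Dallas", "Atlanta", "Miami"],
    revenue={"Phoenix": 5, "Dallas": 8, "Atlanta": 6, "Miami": 4},
    install_cost={"Phoenix": 3, "Dallas": 4, "Atlanta": 5, "Miami": 2},
    budget=9,
    feasible=lambda nodes: True,           # placeholder: always feasible
)
print(picked)
```

A greedy pass like this only approximates the optimum; the talk's formulation searches the solution space more thoroughly with tabu search or genetic algorithms.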
Phase-2: Multistage Node Addition
• Assumptions:
  • We already have the solution from Phase-1
  • The ISP wants to add one node and its associated links into the network at every stage:
    • Hard to debug if all nodes are added simultaneously
    • Node/link construction delays
    • Sequential budget allocation
    • …
  • The time period between stages could range from several months to a few years
Phase-2: Multistage Node Addition
• Multistage dynamic programming (DP) problem with costs for performance degradation
  • Maximize (revenue – cost)
  • Costs instead of hard constraints, to ensure we find feasible solutions
• Costs considered:
  • Node degree
  • Link utilization
  • SD time
  • TD
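A small sketch of the multistage DP over addition orders, with hypothetical revenues and a placeholder degradation cost standing in for the node-degree, utilization, SD, and TD terms.

```python
from functools import lru_cache

# Sketch of the Phase-2 DP (hypothetical inputs): choose the order in which the
# Phase-1 nodes are added so that cumulative (revenue - degradation cost) is
# maximized. `revenue` and `stage_cost` are placeholders.
NEW_NODES = ("Phoenix", "Dallas", "Miami")
revenue = {"Phoenix": 5.0, "Dallas": 8.0, "Miami": 4.0}

def stage_cost(already_added, node):
    # Placeholder degradation cost of adding `node` given the current network.
    return 1.0 / (1 + len(already_added))

@lru_cache(maxsize=None)
def best(remaining):
    # remaining: frozenset of nodes not yet added; returns (value, order).
    if not remaining:
        return 0.0, ()
    added = frozenset(NEW_NODES) - remaining
    options = []
    for node in remaining:
        gain = revenue[node] - stage_cost(added, node)
        future_value, future_order = best(remaining - {node})
        options.append((gain + future_value, (node,) + future_order))
    return max(options)

value, order = best(frozenset(NEW_NODES))
print(f"best addition order: {order}, net value: {value:.2f}")
```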
Numerical Example
[US backbone map: original nodes A–D (Seattle, San Jose, Chicago, New York) and potential new node locations 1–4 (Phoenix, Dallas, Atlanta, Miami)]
Numerical Example – Phase-1 Solution
[Map: optimal end state from Phase-1, showing the new nodes and new links added to the original topology]
Numerical Example – Phase-2 Solution
[Map: the Phase-1 end state, used as the target for the Phase-2 multistage node-addition sequence]
Numerical Example – Phase-2 Solution • Ignoring SD time and TD in Phase-2 results in significant performance deterioration for customers
Remarks
• The two-phase framework is flexible and can easily be adapted to include other constraints or scenario-specific requirements
• Revenue and traffic demands in the network after node additions could change dynamically because of endogenous and exogenous effects
  • Adaptive and stochastic dynamic programming
  • Revenue generated by adding one node is usually not independent of other node additions
  • Uncertainty in predicting traffic demands
• Applications of this framework in different environments
  • Wireless networks (WiFi, WiMAX, cellular)
Talk Outline Two sample problem areas: • Characterizing ‘Service Availability’ of IP networks • Measuring & understanding Internet routing dynamics • Availability as a metric for differentiating topologies • Example application: graceful network upgrade • Modeling and Validating Firewall Configurations • Validating end-to-end reachability • Static analysis on firewall rules
End-to-End Reachability/Security
• When user A sends a packet from a source node S to a destination node D in the Internet:
  • How do we verify there is indeed a route that exists between S and D?
  • How do we verify that the packet follows a path that adheres to inter-domain peering relationships?
  • How do we verify that this end-to-end connection satisfies some higher-level security policy?
    • E.g., only user A can reach D and other users are blocked
• The answer depends on:
  • Router configurations & BGP policies
  • Packet filters along the way: firewalls, NAT boxes, etc.
Firewalls
• Popular network perimeter defense
  • Packet inspection and filtering
• Why are protected networks still observing malicious packets?
• Firewalls can improve security only if they are configured CORRECTLY
• But: a study of 37 firewall rule sets [Wool'04] shows:
  • Only one had just a single misconfiguration
  • All the others had multiple errors
Firewall Configurations • Platform-specific, low-level languages • Cisco PIX, Linux IPTables, BSD PF … • Large and complex • Hundreds of rules • Distributed deployment • Consistent w/ global security policy?
Distributed Firewalls
[Example: a network of firewalls deployed across ISP A and ISP B]
Access Control List (ACL)
• Core of firewall configuration
• Consists of an ordered list of rules
• Individual rule: <P, action>
  • P: a predicate matching packets, over the 5-tuple <proto, sip, spt, dip, dpt>
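A minimal sketch of an ACL as an ordered list of <P, action> rules with first-match semantics; the Rule fields mirror the 5-tuple above, while the sample rules and field encodings are illustrative.

```python
from dataclasses import dataclass
from ipaddress import ip_network, ip_address

@dataclass
class Rule:
    proto: str          # 'tcp', 'udp', or '*'
    sip: str            # source prefix, e.g. '10.0.0.0/8' or '0.0.0.0/0'
    spt: range          # source port range
    dip: str            # destination prefix
    dpt: range          # destination port range
    action: str         # 'accept' or 'deny'

    def matches(self, pkt):
        return ((self.proto in ('*', pkt['proto']))
                and ip_address(pkt['sip']) in ip_network(self.sip)
                and pkt['spt'] in self.spt
                and ip_address(pkt['dip']) in ip_network(self.dip)
                and pkt['dpt'] in self.dpt)

def evaluate(acl, pkt, default='deny'):
    """First-match semantics: the first rule whose predicate matches decides."""
    for rule in acl:
        if rule.matches(pkt):
            return rule.action
    return default

acl = [
    Rule('tcp', '0.0.0.0/0', range(0, 65536), '10.1.1.0/24', range(80, 81), 'accept'),
    Rule('*',   '0.0.0.0/0', range(0, 65536), '10.1.0.0/16', range(0, 65536), 'deny'),
]
pkt = {'proto': 'tcp', 'sip': '192.0.2.7', 'spt': 40000, 'dip': '10.1.1.5', 'dpt': 80}
print(evaluate(acl, pkt))   # -> 'accept'
```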
IPTables/Netfilter model: chains are modeled as function calls
Cisco PIX model: multiple access lists applied in sequential order
Types of Misconfigurations
• Type 1: Policy violations
  • Configurations that contradict a higher-level / network-wide policy, e.g.:
    • Block NetBIOS and Remote Procedure Call traffic
    • Block bogons: unallocated/reserved IP space
• Type 2: Inconsistencies
  • Intra-firewall
  • Inter-firewall
  • Cross-path
• Type 3: Inefficiencies
  • Redundant rules
Intra-Firewall Inconsistencies
• Shadowing: all packets matched by the current rule have already been assigned a different action by preceding rules
[Diagram: IP space showing rule 3's match set covered by rules 1 and 2]
Cross-Path Inconsistencies
[Example: firewalls W, X, Y, Z between the Internet, a DMZ, and the internal network]
• Loophole depends on routing
  • Or "intermittent connectivity"
• Troubleshooting nightmare
Validating End-to-End Reachability/Security
• How do we verify the configuration of firewall rules?
  • Borrow model-checking/static-analysis techniques from software verification
• Static Analysis Algorithm*
  • Control flow analysis
    • Complex chain -> simple list
    • Network topology -> ACL graph
  • Checks at three levels:
    • Individual firewalls
    • Inter-firewall (see paper for details)
    • Root of ACL graph
*Joint work with Prof. Z. Su and Prof. H. Chen
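To illustrate the cross-path check on an ACL graph, here is a toy sketch (Python with networkx, not the actual tool): zones are nodes, each firewall hop carries the set of packet classes it accepts, and two routes between the same zones that accept different sets signal a routing-dependent loophole. The topology and accept sets are assumptions.

```python
import itertools
import networkx as nx

# Toy ACL graph: nodes are network zones, each directed edge carries a firewall
# ACL summarized as the set of packet classes it accepts. A cross-path
# inconsistency exists when two routes between the same pair of zones accept
# different packet sets, so reachability depends on routing.
WEB, SSH = "web", "ssh"   # toy packet classes

g = nx.DiGraph()
g.add_edge("Internet", "W", accepts=frozenset({WEB, SSH}))
g.add_edge("Internet", "X", accepts=frozenset({WEB}))
g.add_edge("W", "Internal", accepts=frozenset({WEB, SSH}))
g.add_edge("X", "Internal", accepts=frozenset({WEB, SSH}))

def accepted_along(path):
    # Packets that survive every firewall on the path (intersection of accepts).
    sets = [g.edges[u, v]["accepts"] for u, v in zip(path, path[1:])]
    return frozenset.intersection(*sets)

paths = list(nx.all_simple_paths(g, "Internet", "Internal"))
for p1, p2 in itertools.combinations(paths, 2):
    diff = accepted_along(p1) ^ accepted_along(p2)
    if diff:
        print(f"cross-path inconsistency between {p1} and {p2}: {sorted(diff)}")
```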
State Transformation
[Diagram: per-rule state machine — each rule accepts or denies the packets it matches and passes the rest to the next rule]
• A/D – packets accepted/dropped so far
• At the end of the ACL, we have A_acl, D_acl
Checks for Individual Firewalls
[Animation: starting from the initial state (remaining set R, accepted set A, denied set D), each rule's predicate P and action are checked against the A and D accumulated so far]
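A small sketch of the state transformation and the per-rule checks over a toy packet universe: A and D accumulate the packets accepted and denied so far, and a rule whose match set is already fully covered is flagged as shadowed if some of its packets previously got the opposite action, otherwise as redundant. A real implementation represents these sets with BDDs, as described next; the example ACL here is hypothetical.

```python
# A is the set of packets accepted so far, D the set denied so far. For each
# rule with match set P: if P minus (A | D) is empty, the rule never fires.

def analyze_acl(acl):
    A, D = set(), set()
    for i, (P, action) in enumerate(acl):
        new = P - (A | D)                  # packets this rule actually decides
        if not new:
            opposite = D if action == "accept" else A
            kind = "shadowed" if (P & opposite) else "redundant"
            print(f"rule {i} ({action}) is {kind}")
        if action == "accept":
            A |= new
        else:
            D |= new
    return A, D                            # A_acl, D_acl at the end of the ACL

# Toy universe: packets named by destination port.
acl = [
    ({80, 443}, "accept"),
    ({25}, "deny"),
    ({80}, "deny"),        # shadowed: port 80 already accepted by rule 0
    ({443}, "accept"),     # redundant: port 443 already accepted by rule 0
]
A_acl, D_acl = analyze_acl(acl)
print("accepted:", sorted(A_acl), "denied:", sorted(D_acl))
```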
Symbolic Technique – BDD
• Our algorithm needs to:
  • Keep track of where all the packets go
  • Efficiently represent arbitrary sets of IP packets
  • Perform fast set operations
• Binary Decision Diagram (BDD)
  • Compact representation of an ordered decision tree
  • Efficient apply algorithm, scalable (2^100)
  • Encode bit-vectors (packet headers)
  • Set operations -> BDD operations
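The sketch below is a toy stand-in for BDDs, not a BDD library: it encodes an 8-bit header so that any set of packets is a 256-bit integer bitset, making union, intersection, and complement single bitwise operations. Real BDDs provide the same set algebra while staying compact for full ~100-bit 5-tuple headers.

```python
# Toy stand-in for the BDD representation: a tiny 8-bit packet header, so any
# set of headers is a 256-bit Python integer used as a bitset.
HEADER_BITS = 8
UNIVERSE = (1 << (1 << HEADER_BITS)) - 1          # all 256 possible headers

def match_prefix(value, mask_bits):
    """Set of headers whose top `mask_bits` bits equal those of `value`."""
    s = 0
    for h in range(1 << HEADER_BITS):
        if h >> (HEADER_BITS - mask_bits) == value >> (HEADER_BITS - mask_bits):
            s |= 1 << h
    return s

P1 = match_prefix(0b1010_0000, 4)                 # headers starting 1010
P2 = match_prefix(0b1011_0000, 4)                 # headers starting 1011
union        = P1 | P2                            # set union
intersection = P1 & P2                            # set intersection (empty here)
complement   = UNIVERSE & ~P1                     # set complement
print(bin(intersection), bin(union).count("1"))   # 0b0, 32 headers in the union
```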
Evaluation
• Takes about 1 second
• Initial allocation of 400 KB (10,000 BDD nodes) is enough