
Internet measurements: fault detection, identification, and topology discovery


Presentation Transcript


  1. Internet measurements: fault detection, identification, and topology discovery • Renata Teixeira • Laboratoire LIP6 • CNRS and UPMC Paris Universitas

  2. Internet monitoring is essential • For network operators • Monitor service-level agreements • Fault diagnosis • Diagnose anomalous behavior • For users or content/application providers • Verify network performance • Verify network neutrality

  3. Network operators can’t know the user’s experience • Network operators only have data for their own AS • AS4 doesn’t detect any problem • AS3 doesn’t know who is affected by the failure [Figure: a path crossing AS1 through AS4, with a failure inside AS3]

  4. End users can’t know what happens in the network • End-hosts can only monitor end-to-end paths

  5. Network tomography to the rescue • Inference of unknown network properties from measurable ones • End users: cooperative monitoring • Among end users • From users to popular services • Network operators: monitor network paths • From monitoring hosts • In the network • Third-party monitoring services • From home gateways (e.g., http://www.nanodatacenters.eu, http://cmon.grenouille.com)

  6. Fault diagnosis using end-to-end measurements • Faults are persistent reachability problems • Detection: continuous path monitoring • Identification: binary tomography

  7. Outline • Background in network tomography • Fault detection • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification using binary tomography • Correlated path reachability • Topology discovery • Open issues

  8. Network tomography to infer link performance • What are the properties of network links? • Loss rate • Delay • Bandwidth • Connectivity • Given end-to-end measurements • No access to routers

  9. The origins • MINC: Multicast-based Inference of Network-internal Characteristics • Key idea: multicast probes • Exploit correlation in traces to estimate link properties • A probe sender multicasts to several probe collectors [MINC project, 1999]

  10. Inferring link loss rates • Assumptions • Known, logical-tree topology • Losses are independent • Multicast probes • Method • Maximum likelihood estimation of the per-link success probabilities αk from the receivers’ loss traces, yielding estimates α̂k [Adams, 2000]
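On the smallest such tree (one shared link from the source to a branch point, then one link to each of two receivers t1 and t2), the maximum-likelihood estimates have a simple closed form. A minimal Python sketch, with an illustrative synthetic trace (function names and the data are not from the talk):

```python
def estimate_two_leaf_tree(outcomes):
    """outcomes: one (r1, r2) pair of booleans per multicast probe,
    indicating whether receivers t1 and t2 saw the probe."""
    n = len(outcomes)
    g1 = sum(r1 for r1, _ in outcomes) / n            # P(t1 receives)
    g2 = sum(r2 for _, r2ok in outcomes if (r2ok or True) for r2 in [r2ok]) / n  # P(t2 receives)
    g2 = sum(r2 for _, r2 in outcomes) / n            # P(t2 receives)
    g12 = sum(r1 and r2 for r1, r2 in outcomes) / n   # P(both receive)
    # With independent losses: g1 = a0*a1, g2 = a0*a2, g12 = a0*a1*a2,
    # so the shared link's success probability is a0 = g1*g2 / g12.
    a0 = g1 * g2 / g12
    return a0, g1 / a0, g2 / a0   # estimates of (a0, a1, a2)

# A trace whose empirical frequencies match a0=0.9, a1=0.8, a2=0.7:
trace = ([(True, True)] * 504 + [(True, False)] * 216
         + [(False, True)] * 126 + [(False, False)] * 154)
```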

  11. Binary tomography • Labels links as good or bad • Loss-rate estimation requires tight correlation • Instead, separate good from bad performance • If a link is bad, all paths that cross the link are bad [Duffield, 2006]

  12. Single-source tree • “Smallest Consistent Failure Set” algorithm • Assumes a single-source tree and known topology • Finds the smallest set of links that explains the bad paths • Assuming bad links are uncommon • A bad link is the root of a maximal bad subtree [Duffield, 2006]
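The “Smallest Consistent Failure Set” idea fits in a few lines of Python; the tree encoding and names below are illustrative, not from the original paper:

```python
def scfs(parent, leaf_status, root):
    """parent: child -> parent links of the single-source tree.
    leaf_status: leaf -> True if the path to that leaf is good."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)

    # Bottom-up pass: is every path through this node bad?
    all_bad = {}
    def compute(node):
        if node not in children:                      # leaf
            all_bad[node] = not leaf_status[node]
        else:
            results = [compute(c) for c in children[node]]
            all_bad[node] = all(results)
        return all_bad[node]
    compute(root)

    # Top-down pass: report each root of a maximal bad subtree.
    bad_links = []
    def mark(node):
        for c in children.get(node, []):
            if all_bad[c]:
                bad_links.append((node, c))           # inferred bad link
            else:
                mark(c)
    mark(root)
    return bad_links
```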

  13. Fault identification with binary tomography • Fault monitoring needs multiple sources and targets • The problem becomes NP-hard • Minimum hitting set problem • Iterative greedy heuristic • Given the set of links in bad paths • Iteratively choose the link that explains the max number of bad paths • Hitting set of a link = paths that traverse the link [Kompella, 2007] [Dhamdhere, 2007]
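A sketch of the greedy heuristic, assuming each bad path is given as the set of links it traverses (names illustrative):

```python
def greedy_hitting_set(bad_paths):
    """bad_paths: iterable of sets of links; returns suspect links."""
    remaining = [set(p) for p in bad_paths]
    chosen = set()
    while remaining:
        # Count, for each link, how many unexplained bad paths it hits.
        counts = {}
        for path in remaining:
            for link in path:
                counts[link] = counts.get(link, 0) + 1
        best = max(counts, key=counts.get)   # explains the most bad paths
        chosen.add(best)
        remaining = [p for p in remaining if best not in p]
    return chosen
```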

  14. Practical issues • Topology is often unknown • Need to measure accurate topology • Multicast not available • Need to extract correlation from unicast probes • Even using probes from different monitors • Control of targets is not always practical • Need one-way performance from round-trip probes • Links can fail for some paths, but not all • Need to extend tomography algorithms

  15. Outline • Background in network tomography • Fault detection with no control of targets • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification using binary tomography • Correlated path reachability without multicast • Topology discovery • Open issues

  16. Detection techniques • Active probing: ping • Send probe, collect response • From any end host • Works for network operators and end users • Passive analysis of user’s traffic • Tap incoming and outgoing traffic • At user’s machines or servers: tcpdump, pcap • Inside the network: DAG card • Monitor status of TCP connections

  17. Detection with ping • Monitor m sends an ICMP echo request to target t • If m receives the ICMP echo reply • Then, the path is good • If no reply arrives before the timeout • Then, the path is bad

  18. Persistent failure or measurement noise? • Many reasons to lose probe or reply • Timeout may be too short • Rate limiting at routers • Some end-hosts don’t respond to ICMP request • Transient congestion • Routing change • Need to confirm that failure is persistent • Otherwise, may trigger false alarms

  19. Failure confirmation • Upon detection of a failure, trigger extra probes • Goal: minimize detection errors • Sending more probes • Waiting longer between probes • Tradeoff: detection error vs. detection time (a loss burst on a path can mimic a failure) [Cunha, 2009]
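The tradeoff can be made concrete with a back-of-the-envelope model that assumes independent probe losses at rate p; this is an idealization (real losses are bursty, which is the harder case studied in [Cunha, 2009]):

```python
def confirmation_tradeoff(loss_rate, k, spacing):
    """Returns (false-alarm probability, added detection delay) when a
    suspected failure is confirmed only if k extra probes, sent
    `spacing` seconds apart, are all lost."""
    false_alarm = loss_rate ** k   # k independent chance losses in a row
    extra_delay = k * spacing      # time spent confirming a real failure
    return false_alarm, extra_delay
```

More probes or longer spacing reduce false alarms but delay detection, which is exactly the tradeoff on the slide.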

  20. Passive detection at end hosts • tcpdump/pcap captures packets • Track the status of each TCP connection • RTTs, timeouts, retransmissions • Multiple timeouts indicate the path is bad • If current seq. number > last seq. number seen • Path is good • If current seq. number = last seq. number seen • A retransmission timeout has occurred • After four timeouts, declare the path bad [Zhang, 2004]
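The per-connection rule can be sketched as follows; the class name and encoding are illustrative, not from [Zhang, 2004]:

```python
class TcpPathMonitor:
    BAD_AFTER = 4   # declare the path bad after four timeouts

    def __init__(self):
        self.last_seq = None
        self.timeouts = 0

    def observe(self, seq):
        """Feed the sequence number of each outgoing data segment."""
        if self.last_seq is not None and seq == self.last_seq:
            self.timeouts += 1          # retransmission: a timeout occurred
        elif self.last_seq is None or seq > self.last_seq:
            self.timeouts = 0           # new data: the path makes progress
        # (reordered segments with seq < last_seq are simply ignored)
        if self.last_seq is None or seq > self.last_seq:
            self.last_seq = seq
        return "bad" if self.timeouts >= self.BAD_AFTER else "good"
```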

  21. Passive detection inside the network is hard • Traffic volume is too high • Need special hardware • DAG cards can capture packets at high speeds • May lose packets • Tracking TCP connections is hard • May not capture both sides of a connection • Large processing and memory overhead

  22. Passive vs. active detection • Passive • No need to inject traffic • Detects all failures that affect the user’s traffic • Gets responses from targets that don’t respond to ping • But not always possible to tap the user’s traffic • And only detects failures in paths that carry traffic • Active • No need to tap the user’s traffic • Detects failures in any desired path • But probing overhead to cover a large number of paths and detect failures fast

  23. Outline • Background in network tomography • Fault detection with no control of targets • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification using binary tomography • Correlated path reachability without multicast • Topology discovery • Open issues

  24. Active monitoring: reducing probing overhead • Monitors M1 and M2 probe target hosts T1–T3 across a target network • Goal: detect failures of any of the interfaces in the target network with minimum probing overhead

  25. The coverage solution • Instead of probing all paths, select the minimum set of paths that covers all interfaces in the target network • The coverage problem is NP-hard • Solution: greedy set-cover heuristic [Nguyen, 2004] [Bejerano, 2003]
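The greedy set-cover heuristic can be sketched as below, assuming each candidate path is described by the set of interfaces it crosses (path and interface names illustrative):

```python
def greedy_path_cover(paths):
    """paths: path id -> set of interfaces the path crosses.
    Returns a small list of paths covering every interface."""
    uncovered = set().union(*paths.values())
    selected = []
    while uncovered:
        # Pick the path that covers the most still-uncovered interfaces.
        best = max(paths, key=lambda p: len(paths[p] & uncovered))
        selected.append(best)
        uncovered -= paths[best]
    return selected
```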

  26. Coverage solution doesn’t detect all types of failures • Detects fail-stop failures • Failures that affect all packets that traverse the faulty interface • E.g., interface or router crashes, fiber cuts, bugs • But not path-specific failures • Failures that affect only a subset of the paths that cross the faulty interface • E.g., router misconfigurations [Nguyen, 2009]

  27. New formulation of the failure detection problem • Select the frequency at which to probe each path • Low-frequency per-path probing can still achieve high-frequency probing of each interface • E.g., probing each of three paths once every 9 minutes probes their shared interface once every 3 minutes [Nguyen, 2009]
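The arithmetic behind the figure, assuming the probes of the paths sharing an interface are evenly interleaved (an idealization):

```python
def interface_interval(path_intervals_min):
    """Effective probing interval (minutes) of an interface crossed by
    several paths, each probed at its own per-path interval."""
    rate = sum(1.0 / t for t in path_intervals_min)   # probes per minute
    return 1.0 / rate
```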

  28. Outline • Background in network tomography • Fault detection with no control of targets • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification using binary tomography • Correlated path reachability without multicast • Topology discovery • Open issues

  29. Is the failure in the forward or reverse path? • Ping can’t tell: losing the probe and losing the reply look the same at the monitor • Paths can be asymmetric • Load balancing • Hot-potato routing

  30. Disambiguating one-way losses: spoofing • The monitor requests the spoofer to send a probe • The spoofer sends a spoofed probe with the source address of the monitor • If the reply reaches the monitor, the reverse path is good [Katz-Bassett, 2008]

  31. Limits of spoofing • Network operators often drop spoofed packets • Spoofed packets are normally used for attacks • Placement of the spoofer matters • Paths from the spoofer to targets need to be independent of the paths from the monitors

  32. Summary: Fault detection • End users: passive plus active probing • Passive measurements capture user’s experience • Active probes • When path has no traffic • When TCP connections are too short • Network operators: alarms plus active probing • Alarm systems directly report many faults • Active monitoring to capture customer’s experience • Detect blackholes (i.e., faults that don’t appear in alarms) • Detect faults in other networks

  33. Outline • Background in network tomography • Fault detection with no control of targets • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification • Correlated path reachability without multicast • Topology discovery • Open issues

  34. Uncorrelated measurements lead to errors • Lack of synchronization leads to inconsistencies • Probes cross links at different times • A path may change between probes, leading to a mistakenly inferred failure

  35. Sources of inconsistencies • In measurements from a single monitor • Probing all targets can take time • In measurements from multiple monitors • Hard to synchronize monitors for all probes to reach a link at the same time • Impossible to generalize to all links

  36. Inconsistent measurements with multiple monitors • Monitors m1…mK each record path reachability at times t1…tN • E.g., at time tN monitor m1 sees its path as good while mK sees its path as bad: the measurements are inconsistent

  37. Consistency has a cost • Delays fault identification • Cannot identify short failures • Solution: reprobe paths after a failure so that all monitors report a consistent view [Cunha, 2009]

  38. Summary: Correlated measurements • Trade-off: consistency vs. identification speed • Faster identification leads to false alarms • Slower identification misses short failures • Network operators • Too many false alarms are unmanageable • Longer failures are the ones that need intervention • End users • Even short failures affect performance

  39. Outline • Background in network tomography • Fault detection with no control of targets • Active vs. passive measurements • Reducing overhead of active measurements • Disambiguating one-way failures • Fault identification • Correlated path reachability without multicast • Topology discovery • Open issues

  40. Measuring router topology • With access to routers (or “from inside”) • Topology of one network • Routing monitors (OSPF or IS-IS) • No access to routers (or “from outside”) • Multi-AS topology or from end-hosts • Monitors issue active probes: traceroute

  41. Topology from inside • Routing protocols flood state of each link • Periodically refresh link state • Report any changes: link down, up, cost change • Monitor listens to link-state messages • Acts as a regular router • AT&T’s OSPFmon or Sprint’s PyRT for IS-IS • Combining link states gives the topology • Easy to maintain, messages report any changes [Mortier] [Shaikh, 2004]

  42. Inferring a path from outside: traceroute • Monitor m sends probes with increasing TTL toward target t • The probe with TTL = 1 expires at router A, which returns an ICMP TTL-exceeded from interface A.1 • The probe with TTL = 2 expires at router B, which answers from B.1 • Inferred path: m, A.1, B.1, t (the outgoing interfaces A.2 and B.2 remain invisible)
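The inference step can be mimicked with a toy model (topology and names made up): each probe’s TTL expires one hop further along the path, and the expiring router answers from its incoming interface:

```python
def traceroute(hops, target):
    """hops: list of (router, incoming interface) pairs on the actual
    path; returns the addresses traceroute would infer."""
    inferred = []
    for ttl in range(1, len(hops) + 2):
        if ttl <= len(hops):
            _router, in_iface = hops[ttl - 1]
            inferred.append(in_iface)    # source of the TTL-exceeded reply
        else:
            inferred.append(target)      # the target itself finally answers
    return inferred
```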

  43. A traceroute path can be incomplete • Load balancing is widely used • Traceroute only probes one path • Sometimes traceroute gets no answer (stars) • ICMP rate limiting • Anonymous routers • Tunnelling (e.g., MPLS) may hide routers • Routers inside the tunnel may not decrement the TTL

  44. Traceroute under load balancing • When a load balancer L splits probes across parallel paths (e.g., via A–C and via B–D), the inferred path can miss nodes and links and can contain false links (e.g., a direct link between the two parallel branches that doesn’t exist) [Augustin, 2006]

  45. Errors happen even under per-flow load balancing • Traceroute uses the destination port as the probe identifier • It needs to match each probe to its response • The response only carries the header of the issued probe • Varying the port makes successive probes belong to different flows, so they may take different paths [Augustin, 2006]

  46. Paris traceroute • Solves the problem with per-flow load balancing • All probes to a destination belong to the same flow • Changes the location of the probe identifier • Uses the UDP checksum, which is not part of the flow identifier [Augustin, 2006]
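The difference can be illustrated with a toy per-flow balancer that hashes the five-tuple. The hash and addresses are illustrative; 33434 is traceroute’s customary UDP base port:

```python
def next_hop(five_tuple, hops=("B", "D")):
    """Toy per-flow load balancer: hash the five-tuple, pick a hop."""
    return hops[hash(five_tuple) % len(hops)]

def classic_probe(ttl):
    # Classic traceroute: the destination port doubles as the probe
    # identifier, so the five-tuple changes with every probe.
    return ("m", "t", 17, 33434 + ttl, 53)

def paris_probe(ttl):
    # Paris traceroute: ports fixed, so the five-tuple never changes;
    # the per-probe identifier lives in the UDP checksum instead,
    # which the balancer never hashes.
    return ("m", "t", 17, 33434, 53)
```

Classic probes with different TTLs may hash to different next hops, while all Paris probes follow one flow.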

  47. Topology from traceroutes • Inferred nodes = interfaces, not routers (e.g., interfaces C.1 and C.2 of one router C appear as two nodes) • Coverage depends on monitors and targets • Misses links and routers • Some links and routers appear multiple times

  48. Alias resolution: map interfaces to routers • Direct probing • Probe an interface; the response may come from another interface of the same router • Responses from the same router will have close IP identifiers and the same TTL • Record-route IP option • Records up to nine IP addresses of routers in the path [Spring, 2002] [Sherwood, 2008]
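A sketch of the IP-ID test used in direct probing: many routers draw the IP ID field from one shared counter, so IDs sampled alternately from two aliases of the same router form one increasing sequence with small gaps. The threshold and sample values are illustrative:

```python
def likely_aliases(ids_a, ids_b, max_gap=10):
    """ids_a, ids_b: IP IDs sampled alternately from two addresses
    (a1, b1, a2, b2, ...). Returns True if the interleaved sequence
    increases in small steps, suggesting one shared counter."""
    merged = [x for pair in zip(ids_a, ids_b) for x in pair]
    return all(0 < merged[i + 1] - merged[i] <= max_gap
               for i in range(len(merged) - 1))
```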

  49. Large-scale topology measurements • Probing a large topology takes time • E.g., probing 1200 targets from PlanetLab nodes takes 5 minutes on average (using 30 threads) • Probing more targets covers more links • But, getting a topology snapshot takes longer • Snapshot may be inaccurate • Paths may change during snapshot • Hard to get up-to-date topology • To know that a path changed, need to re-probe

  50. Faster topology snapshots • Probing redundancy • Intra-monitor • Inter-monitor • Doubletree • Combines backward and forward probing to eliminate redundancy [Donnet, 2005]
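Doubletree’s forward probing with a globally shared stop set can be sketched as below (topology and names illustrative): a monitor stops probing toward a destination once it reaches an (interface, destination) pair another monitor has already discovered:

```python
def forward_probe(path, dest, stop_set):
    """path: interfaces from the monitor toward dest, in order.
    stop_set: globally shared set of (interface, destination) pairs."""
    discovered = []
    for iface in path:
        discovered.append(iface)
        if (iface, dest) in stop_set:
            break                        # rest of the path already known
        stop_set.add((iface, dest))
    return discovered
```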
