MOJO: A Distributed Physical Layer Anomaly Detection System for 802.11 WLANs

Presentation Transcript


  1. MOJO: A Distributed Physical Layer Anomaly Detection System for 802.11 WLANs
  Presented by Richard D. Gopaul, CSCI 388

  2. Authors
  • Anmol Sheth
  • Christian Doerr
  • Dirk Grunwald
  • Richard Han
  • Douglas Sicker
  Department of Computer Science, University of Colorado at Boulder, Boulder, CO 80309

  3. Problem
  • Existing 802.11 deployments provide unpredictable performance
  • 802.11 wireless networks are cheap and easy to deploy
  • Two classes of deployments:
    • Planned deployments (large companies)
    • Small-scale chaotic deployments (home users)

  4. Reasons for Unpredictable Performance
  • Noise and interference: co-channel interference, Bluetooth, microwave ovens, …
  • Hidden terminals: node location, heterogeneous transmit powers
  • Capture effects: simultaneous transmissions
  • MAC layer limitations: timers, rate adaptation, …
  • Heterogeneous receiver sensitivities

  5. Problems With Existing Solutions
  • Wireless networks encounter time-varying conditions
    • A single site survey is not enough
  • Existing tools for diagnosing WLANs only look at the MAC layer and up
    • They cannot distinguish or determine the root cause of a problem
    • They observe only the aggregate effects of multiple PHY layer anomalies
  • The result is misdiagnosis and suboptimal solutions

  6. How Faults Propagate in the Network Stack (figure)

  7. How Faults Propagate in the Network Stack (figure, continued)

  8. Contributions of This Paper
  • Attempts to build a unified framework for detecting underlying physical layer anomalies
  • Quantifies the effects of different faults on a real network
  • Builds statistical detection algorithms for each physical effect and evaluates their effectiveness in a real network testbed

  9. System Architecture
  • Provides visibility into the PHY layer
  • Faults are observed by multiple sensors
  • Based on an iterative design process:
    • Artificially replicate faults in a testbed
    • Measure the impact of each fault at every layer of the network stack

  10. MOJO
  • A distributed physical layer anomaly detection system for 802.11 WLANs
  • Design goals:
    • Flexible sniffer deployment
    • Inexpensive, in both hardware cost and communication overhead
    • Accurate in diagnosing PHY layer root causes
    • Implements efficient remedies
    • Operates in near real time

  11. Initial Design
  • Main components:
    • Wireless sniffers
    • Data collection mechanism
    • Inference engine, which diagnoses problems and suggests remedies
  • Data collection and the inference engine are initially centralized at a single server

  12. Operation Overview
  • Wireless sniffers sense the PHY layer: network interference, signal strength variations, concurrent transmissions
  • A modified Atheros-based MadWifi driver runs on the client nodes
  • Each sniffer periodically transmits a summary to the centralized inference engine
  • The inference engine collects the summaries from the sniffers and runs the detection algorithms (the reporting loop is sketched below)
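
  The reporting loop might look like the minimal Python sketch below. The summary fields, the capture_frames() hook, and the inference.local endpoint are all illustrative assumptions; the actual sniffer is a modified MadWifi driver written in C, so this only mirrors the control flow.

      import json
      import socket
      import time

      def build_summary(frames):
          # Hypothetical per-interval summary; these field names are
          # illustrative, not the paper's actual wire format.
          n = max(len(frames), 1)
          return {
              "timestamp": time.time(),
              "frame_count": len(frames),
              "avg_rssi": sum(f["rssi"] for f in frames) / n,
              "retransmissions": sum(1 for f in frames if f["retry"]),
          }

      def report_loop(capture_frames, server=("inference.local", 9999), interval=10):
          # Every `interval` seconds, summarize the frames captured by the
          # monitoring radio and send the summary to the inference engine.
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          while True:
              frames = capture_frames(interval)  # blocks for one interval
              sock.sendto(json.dumps(build_summary(frames)).encode(), server)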

  13. Sniffer Placement
  • Sniffer placement is key to monitoring and detection
  • Sniffer locations may need to change as clients move over time; fixed locations cannot be assumed, and a static placement yields suboptimal monitoring
  • Multiple sniffers and merged sniffer traces are necessary to account for frames any single sniffer misses (see the merging sketch below)
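
  Merging the per-sniffer traces into one timeline could be done roughly as in this Python sketch. Keying frames on a (transmitter, sequence number) pair is an assumption, and a real merger would also have to handle the wrap-around of 802.11's 12-bit sequence numbers.

      def merge_traces(traces):
          # Each trace is a list of dicts with 'src', 'seq', and 'ts' keys:
          # transmitter address, 802.11 sequence number, receive timestamp.
          # A frame seen by several sniffers is kept once; frames missed by
          # one sniffer are recovered from the others.
          merged = {}
          for trace in traces:
              for frame in trace:
                  key = (frame["src"], frame["seq"])
                  if key not in merged or frame["ts"] < merged[key]["ts"]:
                      merged[key] = frame  # keep the earliest observation
          return sorted(merged.values(), key=lambda f: f["ts"])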

  14. Prototype Implementation
  • Uses two wireless interfaces on each client: one for data, the other for monitoring
  • The second radio receives every frame transmitted by the primary radio
  • Average sniffer payload of 768 bytes/packet
  • About 1.3 KB of summary data every 10 seconds, i.e. roughly 130 bytes/sec, well under 200 bytes/sec

  15. Detection of Noise
  • Noise is caused by interfering wireless nodes or by non-802.11 devices such as microwave ovens, Bluetooth radios, and cordless phones
  • A signal generator was used to emulate the noise source
  • Node A was connected to the access point and to the signal generator using an RF splitter

  16. Detection of Noise
  • The power of the signal generator was increased from -90 dBm to -50 dBm
  • The packet payload was increased from 256 bytes to 1024 bytes in 256-byte steps
  • 1000 frames were transmitted for each power and payload size setting (the sweep is sketched below)
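
  The sweep reduces to two nested loops, as in the sketch below. set_generator_power() and send_frames() are hypothetical test-harness hooks, and the 5 dBm power step is an assumption; the slide gives only the endpoints.

      def run_noise_sweep(set_generator_power, send_frames):
          # Noise power from -90 to -50 dBm; payloads from 256 to 1024
          # bytes in 256-byte steps; 1000 frames per setting.
          results = {}
          for power_dbm in range(-90, -49, 5):
              set_generator_power(power_dbm)
              for payload in range(256, 1025, 256):
                  stats = send_frames(count=1000, payload_bytes=payload)
                  results[(power_dbm, payload)] = stats  # e.g. loss, RTT, retries
          return results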

  17. RTT vs. Signal Power
  • RTT is stable until the noise power reaches -65 dBm
  • Beyond -50 dBm there is 100% packet loss

  18. % Data Frames Retransmitted
  • Signal power set to -60 dBm

  19. Time Spent in Backoff and Busy Sensing of Medium

  20. Detection of Noise
  • The noise floor was sampled every 5 minutes for a period of 5 days in a residential environment (a threshold-based check is sketched below)
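
  A threshold-based check over such samples could look like the sketch below. The one-day baseline window and the k-sigma threshold are illustrative assumptions, not the paper's tuned values.

      from statistics import mean, stdev

      def noise_anomaly(samples_dbm, window=288, k=3.0):
          # samples_dbm: chronological noise-floor readings, one per 5 min,
          # so 288 samples cover roughly one day of baseline.
          if len(samples_dbm) <= window:
              return False  # not enough history for a baseline
          baseline = samples_dbm[-window - 1:-1]
          mu, sigma = mean(baseline), stdev(baseline)
          return samples_dbm[-1] > mu + k * sigma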

  21. Hidden Terminal and Capture Effect
  • Both are caused by concurrent transmissions that collide at the receiver
  • In the hidden terminal case, the transmitting nodes are not in range of one another and can collide at any time
  • In the capture effect case, the nodes are not necessarily hidden from one another
  • So why would they transmit concurrently? (answered on the next slide)

  22. Hidden Terminal and Capture Effect
  • The contention window is reset to CWmin (31 usec) on receiving a successful ACK
  • The backoff interval is selected from within the contention window
  • The Clear Channel Assessment (CCA) time is 25 usec, leaving a 6 usec region of overlap
  • If two nodes' backoff timers expire within the CCA time of each other, neither senses the other's transmission in time and both transmit concurrently

  23. Hidden Terminal and Capture Effect
  • Experiment setup:
    • Node B has a higher SNR than node A at the AP
    • Node C is not visible to node B or node A
    • Rate fallback is disabled
    • Node pairs A-B or A-C generate TCP traffic to the DEST node
    • TCP packet sizes varied from 256 to 1024 bytes
    • 10 test runs for each payload size, at 5.5 and 11 Mbps

  24. Hidden Terminal and Capture Effect
  • Experimental results (figure)

  25. Detection Algorithm
  • Executed on a central server
  • Maintains a sliding window buffer of recorded data frames, scanning it for overlapping transmissions (a sketch follows)
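
  One plausible form of the sliding-window check, applied to a merged, time-ordered trace: flag pairs of frames from different senders whose on-air intervals overlap. The airtime model and the 'len' field are assumptions, and the paper's algorithm additionally separates the hidden terminal case from the capture case.

      def frame_airtime_us(payload_bytes, rate_mbps):
          # On-air time of the frame body in microseconds (1 Mbps = 1 bit/us);
          # PHY preamble and MAC header are ignored for brevity.
          return payload_bytes * 8 / rate_mbps

      def concurrent_transmissions(merged, rate_mbps=11):
          # merged: time-ordered frames with 'src', 'ts' (us), 'len' (bytes).
          overlaps = []
          for prev, cur in zip(merged, merged[1:]):
              if prev["src"] == cur["src"]:
                  continue
              prev_end = prev["ts"] + frame_airtime_us(prev["len"], rate_mbps)
              if cur["ts"] < prev_end:  # cur started before prev ended
                  overlaps.append((prev, cur))
          return overlaps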

  26. Detection Accuracy
  • Time synchronization across sniffers is essential
  • The 802.11 time synchronization protocol is used
  • Measured error: +/- 4 usec (a sketch of estimating the residual offset follows)
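
  One way the residual offset between two sniffers could be estimated, assuming each records the 64-bit TSF timestamp carried in the AP's beacons alongside its own local receive time. This is an illustration of the idea, not necessarily how the paper measured its +/- 4 usec figure.

      def residual_offset_us(beacons_a, beacons_b):
          # Match beacons by TSF value and average the difference between
          # the two sniffers' local receive timestamps (in microseconds).
          by_tsf = {b["tsf"]: b["local_ts"] for b in beacons_b}
          offsets = [a["local_ts"] - by_tsf[a["tsf"]]
                     for a in beacons_a if a["tsf"] in by_tsf]
          return sum(offsets) / len(offsets) if offsets else None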

  27. Long Term Signal Strength Variations of the AP
  • Different hardware has different transmit powers and receiver sensitivities
  • The transmit power of the AP was varied between 100 mW and 5 mW

  28. Detection Algorithm
  • Signal strength variations observed by a single sniffer are not enough to differentiate between:
    • Localized events, e.g. fading
    • Global events, e.g. a change in the TX power of the AP
  • Multiple distributed sniffers are needed
  • Experiments show that three distributed sensors are sufficient to detect correlated changes in signal strength (a classification sketch follows)
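
  A sketch of the correlation logic across the distributed sniffers: a sustained drop seen by every sniffer is classified as global (e.g. an AP transmit-power change), while a drop seen by only some of them is treated as local fading. The 5 dB threshold and the window size are illustrative assumptions.

      from statistics import mean

      def classify_rssi_change(sniffer_rssi, drop_db=5.0, window=30):
          # sniffer_rssi: sniffer id -> chronological RSSI readings of the AP.
          drops = []
          for series in sniffer_rssi.values():
              if len(series) < 2 * window:
                  return None  # not enough history to decide
              before = mean(series[-2 * window:-window])
              after = mean(series[-window:])
              drops.append(before - after >= drop_db)
          if all(drops):
              return "global"  # correlated change, e.g. AP power reduced
          if any(drops):
              return "local"   # localized event, e.g. fading at one sniffer
          return "none"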

  29. Observations From Three Sniffers (figure; annotation: "AP Power Reduced")

  30. Detection Accuracy vs. AP Signal Strength
  • The AP power was changed once every 5 minutes

  31. Conclusion
  • MOJO is a unified framework to diagnose physical layer faults in 802.11-based wireless networks
  • Experimental results were gathered from a real testbed
  • The collected information was used to build threshold-based statistical detection algorithms for each fault
  • A first step toward self-healing wireless networks?
