Correlating Network Attacks Using Bayesian Multiple Hypothesis Tracking Daniel J. Burroughs Institute for Security Technology Studies Thayer School of Engineering Dartmouth College May 1, 2002
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
Institute for Security Technology Studies • Security and counter-terrorism research center • Funded by the NIJ • Main focus is on computer security • Investigative Research for Infrastructure Assurance (IRIA) • Joint effort with Thayer School of Engineering
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
The Internet and Security in a Nutshell • [Diagram: IDS sensors on separate networks, each raising its own alert]
What is the Need? • Distributed and/or coordinated attacks • Increasing rate and sophistication • Infrastructure protection • Coordinated attack against infrastructure • Attacks against multiple infrastructure components • Overwhelming amounts of data • Huge effort required to analyze • Lots of uninteresting events
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
What is the System? • [Diagram: SHADOW, RealSecure, and Snort sensors feeding the Tracking System and a Security Database] • Reorganization of existing data • Data fusion • Building situational knowledge • Not an intrusion detection system
Network Centered View • Network viewed in isolation • Limited view of attacker’s activity • Defensive posture
Distributed Attack • [Diagram: distributed denial-of-service attack]
Attacker Centered View • More complete picture • Information gathering • Requires cooperation and data fusion
Radar Tracking • [Diagram: radar-style tracking applied to RealSecure, Snort, and SHADOW sensors] • Multiple sensors • Multiple targets • Heterogeneous sensors • Real-time tracking • Incomplete data • Inaccurate data
Gather and Correlate • Collecting data • Time correlation, communications, common formatting, etc. • These issues are addressed by numerous projects • IDEF, IDMEF, CIDF, D-Shield, Incidents.org, etc. • Correlating data • How can we tell what events are related? • Attacker’s goals determine behavior • Multiple hypothesis tracking
Multiple Hypothesis Tracking • Events analyzed on arrival • Scenario created • Alternate hypothesis • [Diagram: a stream of PortScan and BufferOverflow events interpreted either as one attack (Attack 1: PortScan, BufferOverflow) OR as two attacks (Attack 1: PortScan; Attack 2: BufferOverflow)]
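The branching step can be sketched as follows. This is a toy illustration, not the talk's implementation: the event names and the list-of-tracks representation of a hypothesis are assumptions.

```python
# Toy multiple-hypothesis branching: each hypothesis is a partition of
# events into attack tracks, and each new event branches every
# hypothesis into its possible assignments.

def branch(hypotheses, event):
    """For each hypothesis, create one child per possible assignment:
    append the event to an existing track, or start a new track."""
    children = []
    for tracks in hypotheses:
        # The event continues one of the existing attacks...
        for i in range(len(tracks)):
            child = [list(t) for t in tracks]
            child[i].append(event)
            children.append(child)
        # ...or it is the start of a new attack.
        children.append([list(t) for t in tracks] + [[event]])
    return children

hypotheses = [[["PortScan"]]]  # one hypothesis containing one track
hypotheses = branch(hypotheses, "BufferOverflow")
# Two hypotheses: the overflow continues the scan, or is a new attack.
print(len(hypotheses))  # -> 2
```

This is exactly the doubling behavior the hypothesis-management slides address: without pruning, every event multiplies the hypothesis count.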
Hypothesis Evaluation • Hypotheses are evaluated based on the behaviors of the sensor and target • What real-world event caused the given sensor output? • How likely is it that the target moved to this position?
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
IDS Overview • Two methods of intrusion detection • Signature detection (pattern matching) • Low false positive / Detects only known attacks • Statistical anomaly detection • High false positive / Detects wider range of attacks • Two domains to be observed • Network • Host
Signature Detection vs. Anomaly Detection • Modeling signature detection is easy • If a known attack occurred in an observable area, then p(detection) = 1, else p(detection) = 0 • Modeling anomaly detection is more difficult • Noisy and/or unusual attacks are more likely seen • Denial of Service, port scans, unused services, etc. • Other types of attacks may be missed • Malformed web requests, some buffer overflows, etc.
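The signature-sensor model described above is simple enough to write down directly; the signature set here is invented for illustration.

```python
# Sketch of the deterministic forward model for a signature-based IDS:
# p(detection) = 1 for a known attack in an observable area, else 0.
# The signature set is an assumption, not an actual IDS ruleset.

KNOWN_SIGNATURES = {"portscan", "buffer_overflow"}

def p_detection(attack_type, observable):
    """Detection probability for a signature-matching sensor."""
    return 1.0 if observable and attack_type in KNOWN_SIGNATURES else 0.0

print(p_detection("portscan", observable=True))    # -> 1.0
print(p_detection("novel_worm", observable=True))  # -> 0.0
```

An anomaly detector would replace this step function with a noisy probability that depends on how "loud" the attack is, which is why it is harder to model.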
Event Measurements • Minimal feature set is extracted from reports • Source IP, destination IP • Source port, destination port • Type of attack • Time • These are then used to describe a hyperspace through which the attack moves
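As a sketch, reducing an alert to this minimal feature set gives one point in the attack hyperspace. The alert field names below are assumptions, not an actual IDS report schema.

```python
# Extract the minimal feature set from an IDS alert: the resulting
# tuple is one point in the hyperspace the attack moves through.

def extract_features(alert):
    return (alert["src_ip"], alert["dst_ip"],
            alert["src_port"], alert["dst_port"],
            alert["attack_type"], alert["time"])

alert = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
         "src_port": 31337, "dst_port": 80,
         "attack_type": "portscan", "time": 1020211200.0}
point = extract_features(alert)
print(point[4])  # -> portscan
```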
Bayesian Inference • Forward response of sensor is well known • Given real-world event x, what is H(x)? • We need to reason backwards • Given sensor output H(x), what is x? • Forward response and prior distribution of x • Probability of H(x) given x • Probability of a particular x existing
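Numerically, the backward reasoning is one application of Bayes' rule: combine the known forward response p(H(x) | x) with the prior p(x) and normalize. The prior and sensor-response numbers below are illustrative, not from the talk.

```python
# Bayes' rule: p(x | H) = p(H | x) p(x) / sum over x' of p(H | x') p(x').

def posterior(prior, likelihood, output):
    """Posterior over real-world events x given one sensor output H."""
    unnorm = {x: likelihood[x][output] * prior[x] for x in prior}
    z = sum(unnorm.values())
    return {x: v / z for x, v in unnorm.items()}

# Illustrative numbers: scans are common and noisy, overflows rarer
# and quieter.
prior = {"scan": 0.7, "overflow": 0.3}
likelihood = {"scan":     {"alert": 0.9, "silent": 0.1},
              "overflow": {"alert": 0.4, "silent": 0.6}}

post = posterior(prior, likelihood, "alert")
print(round(post["scan"], 2))  # -> 0.84
```

The same structure scales to the real problem: the forward response comes from the sensor model, the prior from how often each kind of attack occurs.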
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
Attacker Model • Attackers are not as easy to observe • Often we are only able to observe them through the sensors (IDS) • State of the attack is difficult to describe • We have three sources of attack data • Simulation • Dartmouth / Thayer network • Def Con
Simulation • Purely generated data • Models for generating attack sequences and noise • Highly controllable – good for development • Generated attacks with ‘background noise’ • Use Thayer IDS for background noise • More interesting for testing
Dartmouth / Thayer Network • [Diagram: ISTS and Thayer network switches instrumented with Snort, SHADOW, and SignalQuest sensors]
Def-Con Capture-The-Flag • Hacker game • Unrealistic data in some aspects • Lack of stealth, lack of firewall, etc. • Many attacks, many scenarios • 16,250 events in 2.5 hours • 89 individual scenarios • Classified by Oliver Dain at Lincoln Labs
State Problem • Desire to describe state as a Markovian process • Reduces computational complexity and space • Easy for an aircraft, difficult for an attack • Non-linear, non-contiguous space • [Diagram: aircraft state described by position and velocity (X, Y, Z, yaw, pitch, roll); the state of an attack is an open question]
State Problem • No simple method for describing state • Use a history of events in the track • Increases computational complexity • Increases memory requirements • Use a weighted window of past events • Calculate various relationships between past and current events.
Windowed History • Minimum history needed to differentiate state • Weighting of events to lend more value to recent events • Relationships calculated between pairs and sequences of events • [Diagram: sliding window over events x(t-6) … x(t-1), x(t)]
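A minimal sketch of the weighted window: keep the last N events of a track and weight recent ones more heavily when scoring a candidate event against it. The window length, decay rate, and similarity measure are all assumptions for illustration.

```python
# Score a new event against a track using an exponentially weighted
# window over the track's recent history.

def window_weights(n, decay=0.5):
    """Normalized weights for the n most recent events, newest first."""
    raw = [decay ** k for k in range(n)]
    z = sum(raw)
    return [w / z for w in raw]

def windowed_score(history, event, similarity, n=4, decay=0.5):
    """Weighted similarity of the new event to the recent history."""
    recent = history[-n:][::-1]  # newest first
    weights = window_weights(len(recent), decay)
    return sum(w * similarity(past, event)
               for w, past in zip(weights, recent))

# Illustrative similarity: same source IP or not.
same_src = lambda a, b: 1.0 if a["src"] == b["src"] else 0.0
history = [{"src": "10.0.0.5"}, {"src": "10.0.0.5"}, {"src": "10.0.0.7"}]
score = windowed_score(history, {"src": "10.0.0.7"}, same_src)
print(round(score, 3))  # -> 0.571
```

Only the most recent event matches the candidate's source, but because it carries the largest weight the score still favors continuing this track.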
Common History • Don't care which path was taken • Just need to distinguish current state • [Diagram: alternative event paths (1a/1b/1c and 2a/2b/2c) leading to the same states, State1 and State2]
Predictive Model • To determine likelihood of event belonging in series, predictive models are needed • Based on current state, what is the probability distribution for the target motion? • Different types of attacks have different distributions
Attacker Motion Probability Distributions • [Charts: motion update for scanning; motion update for DoS (Denial of Service)] • Events are readily distinguishable based on arrival time and source IP distance
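A minimal sketch of that distinction: scan events arrive steadily from one source, while DoS floods arrive in bursts from widely spread (often spoofed) sources. The thresholds and data below are invented for illustration.

```python
# Distinguish scan-like from DoS-like event sequences using mean
# inter-arrival time and the spread of source IP addresses.

def classify(events):
    """events: time-sorted list of (arrival_time, src_ip_as_int)."""
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    mean_gap = sum(gaps) / len(gaps)
    ip_spread = max(e[1] for e in events) - min(e[1] for e in events)
    # Illustrative thresholds: sub-10ms bursts from sources spread over
    # more than a /16 look like a (spoofed) flood.
    if mean_gap < 0.01 and ip_spread > 2 ** 16:
        return "dos"
    return "scan"

scan = [(t * 1.0, 0x0A000005) for t in range(5)]   # one src, steady pace
dos = [(t * 0.001, 0x0A000005 + t * 2 ** 20)       # burst, spread srcs
       for t in range(5)]
print(classify(scan), classify(dos))  # -> scan dos
```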
Feature Extraction • Historical data sets used to determine good differentiating feature sets • These are used in combination to measure the fitness of new events to scenarios • Use neural net to discover complex patterns
Neural Net • Empirically derived probability distributions work well for simple attacks • But they are difficult to compute for more complex ones • Machine learning is applied to solve this • The neural net is fed the event feature-set values • A fitness function is calculated from this
Neural Net • Fitness functions created for various feature subsets • i.e., rate of events vs. IP source velocity • These values feed a neural net • NN then determines overall fitness value
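The combining step can be sketched with a single neuron: per-feature fitness values (e.g. event-rate fit, IP source-velocity fit) feed a network that outputs one overall fitness. The hand-set weights below stand in for what the real system would learn from labeled scenarios.

```python
# Single-neuron combiner over per-feature fitness values, standing in
# for the trained neural net that produces the overall fitness.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def overall_fitness(feature_fits, weights, bias):
    """Weighted sum of feature fitness values, squashed to (0, 1)."""
    s = sum(w * f for w, f in zip(weights, feature_fits)) + bias
    return sigmoid(s)

# Assumed inputs: rate-of-events fit and IP source-velocity fit.
fits = [0.9, 0.8]
score = overall_fitness(fits, weights=[2.0, 2.0], bias=-2.0)
print(score > 0.5)  # -> True
```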
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
Hypothesis Management • In the brute-force approach, each new event doubles the number of hypotheses • Without pruning, complexity grows exponentially
Branch and Prune • Calculate all possible hypotheses • Prune back unlikely or completed ones • Must be very aggressive in pruning • Many hypotheses are not kept long • Inefficient method of controlling growth
Selective Branching • Often, there is a clear winner • Why bother creating hypotheses for the others? • Measure the difference between the fitness of the top choice and that of the second choice • If it is greater than a predetermined threshold, no branching is needed • The number of branches can be controlled with this threshold
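Selective branching can be sketched as follows: rank the candidate assignments by fitness and branch only on alternatives within a threshold of the winner. The threshold value is illustrative.

```python
# Keep the best assignment, plus any alternative whose fitness is
# within `threshold` of it; a clear winner produces no extra branches.

def select_branches(scored, threshold=0.2):
    """scored: list of (fitness, assignment) pairs."""
    ranked = sorted(scored, reverse=True)
    best = ranked[0][0]
    return [a for f, a in ranked if best - f <= threshold]

# Clear winner: no alternate hypotheses are created.
print(select_branches([(0.9, "track1"), (0.3, "track2")]))
# Close call: both assignments are kept as hypotheses.
print(select_branches([(0.6, "track1"), (0.5, "new_track")]))
```

Raising the threshold keeps more alternatives (safer but more expensive); lowering it approaches greedy single-hypothesis tracking.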
Preprocessing and Multi-pass • Some sequences of events are simply related • Port scans • Noisy • Many events • Require many evaluations • Easily grouped • Preprocessing groups these into single larger events
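The grouping step can be sketched as follows: many probe events from one source within a short window collapse into a single aggregate scan event before hypothesis tracking sees them. The window size and event representation are assumptions.

```python
# Collapse runs of probe events from the same source into single
# aggregate "port scan" events, reducing the load on the tracker.

def group_scans(events, window=5.0):
    """events: time-sorted (time, src, dst_port) probes. Returns one
    aggregate event per (source, time-window) cluster."""
    groups = []
    for t, src, port in events:
        last = groups[-1] if groups else None
        if last and last["src"] == src and t - last["end"] <= window:
            last["end"] = t
            last["ports"].add(port)
        else:
            groups.append({"src": src, "start": t, "end": t,
                           "ports": {port}})
    return groups

probes = [(0.0, "10.0.0.5", 21), (0.5, "10.0.0.5", 22),
          (1.0, "10.0.0.5", 23), (60.0, "10.0.0.9", 80)]
grouped = group_scans(probes)
print(len(grouped))  # -> 2
```

Three probes from one source become a single scan event; the unrelated probe a minute later starts its own group.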
Multi-Pass Approach • Develop small attack sequences initially • Chain sequences together in later passes • Small sequences become atomic events • May aid the 'missing data' problem • [Diagram: events a b c d f g h k l m grouped into a-b-c-d, f-g-h, and k-l-m, then chained into a-b-c-d-f-g-h-k-l-m]
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
Testing and Evaluation • Testing has been performed with data collected from the Thayer network and DefCon data sets • Thayer testing used earlier probability distribution method • DefCon testing used machine learning approach • Arranging for a live run at DefCon
Thayer Testing and Evaluation • Testing performed on Thayer data • Roughly 1,500 events • 20 scenarios • Roughly half of the data were single events
Thayer Testing and Evaluation • Accuracy measured by number of correctly placed scenario events • Best hypothesis had ~20% of the single events included in tracks • Most confident hypothesis not always most accurate
DefCon Testing and Evaluation • Testing performed on DefCon data • 2.5 Hour time slice • Roughly 16,000 events • 89 Scenarios • Hand classified by Oliver Dain at Lincoln Labs • Neural net approach used • Trained with random time slice of data
DefCon Testing and Evaluation • [Results chart from Dain & Cunningham (October 2001)]
DefCon Testing and Evaluation • Accuracy measured by number of correctly placed scenario events • Achieved higher accuracy, but less stable with fewer hypotheses
Outline • Institute for Security Technology Studies • Needs and goals • System overview • Sensor Modeling • Attacker Modeling • Hypothesis Management • Testing and Evaluation • Summary and Future Work
Summary • Reorganize data already being collected • Provide ‘Higher level’ view of situation • Reduce the work of the security analyst • Radar tracking analogy • Multisensor data fusion • Multiple hypothesis tracking