Learning Rules from System Call Arguments and Sequences for Anomaly Detection
Gaurav Tandon and Philip Chan
Department of Computer Sciences, Florida Institute of Technology
Overview
• Related work in system call sequence-based systems
• Problem statement – Can system call arguments as attributes improve anomaly detection algorithms?
• Approach
• LERAD (a conditional rule learning algorithm)
• Variants of attributes
• Experimental evaluation
• Conclusions and future work
Related Work
• tide (time-delay embedding), Forrest et al., 1996
• stide (sequence time-delay embedding), Hofmeyr et al., 1999
• t-stide (stide with frequency threshold), Warrender et al., 1999
• Variable-length sequence-based techniques (Wespi et al., 1999, 2000; Jiang et al., 2001)
False alarms!
Problem Statement
• Current models – system call sequences
• What else can we model? System call arguments, e.g., open("/etc/passwd") vs. open("/users/readme")
Approach
• Models based upon system calls
• 3 sets of attributes:
  – system call sequence
  – system call arguments
  – system call arguments + sequence
• Adopt a rule learning approach: Learning Rules for Anomaly Detection (LERAD)
Learning Rules for Anomaly Detection (LERAD) [Mahoney and Chan, 2003]
• Rules have the form: if A = a and B = b, then X ∈ {x1, x2, …}
• A, B, and X are attributes; a, b, x1, x2 are values of the corresponding attributes
• p – probability of observing a value not in the consequent, estimated as p = r/n
• r – cardinality of the set {x1, x2, …} in the consequent
• n – number of samples that satisfy the antecedent
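As a rough illustration of this rule format, the sketch below (Python, with hypothetical attribute names and sample values) encodes a rule "if A = a and B = b then X ∈ {x1, x2, …}" and estimates its violation probability as p = r/n.

```python
# Minimal sketch of a LERAD-style rule: antecedent conditions plus a set of
# allowed consequent values, with p estimated as r/n (r = size of the
# consequent set, n = number of training samples matching the antecedent).
# Attribute names here are placeholders, not the paper's actual attributes.

class Rule:
    def __init__(self, antecedent, target):
        self.antecedent = antecedent      # e.g. {"A": "a", "B": "b"}
        self.target = target              # consequent attribute, e.g. "X"
        self.allowed = set()              # observed consequent values {x1, x2, ...}
        self.n = 0                        # samples satisfying the antecedent

    def matches(self, sample):
        return all(sample.get(k) == v for k, v in self.antecedent.items())

    def train(self, samples):
        for s in samples:
            if self.matches(s):
                self.n += 1
                self.allowed.add(s[self.target])

    @property
    def p(self):
        # Probability estimate of seeing a novel value in the consequent.
        r = len(self.allowed)
        return r / self.n if self.n else 1.0

    def violated_by(self, sample):
        return self.matches(sample) and sample[self.target] not in self.allowed


# Example usage with made-up samples:
samples = [{"A": "a", "B": "b", "X": "x1"},
           {"A": "a", "B": "b", "X": "x2"},
           {"A": "a", "B": "b", "X": "x1"}]
rule = Rule({"A": "a", "B": "b"}, "X")
rule.train(samples)
print(rule.p)                                             # r/n = 2/3
print(rule.violated_by({"A": "a", "B": "b", "X": "x9"}))  # True (novel value)
```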
Overview of LERAD
4 steps involved in rule generation:
1. From a small training sample, generate candidate rules and associate probabilities with them
2. Coverage test to minimize the rule set
3. Update rules beyond the small training sample
4. Validate rules on a separate validation set
Step 1a: Generate Candidate Rules
• Two samples are picked at random (say S1 and S2)
• Matching attributes A, B, and C are picked in random order (say B, C, and A)
• These attributes are used to form rules with 0, 1, and 2 conditions in the antecedent
Step 1b: Generate Candidate Rules
• Values are added to the consequent based on a subset of the training set (say S1–S3)
• A probability estimate p of the rule being violated is associated with every rule
• Rules are sorted in increasing order of p
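A simplified sketch of this candidate-generation step, assuming samples are dictionaries of attribute values: the attributes that match between two random samples are taken in random order, the first becomes the consequent, and the rest are added to the antecedent one at a time to form rules with 0, 1, 2, … conditions, which are then sorted by p.

```python
import random

def candidate_rules(s1, s2, subset):
    """Step 1 sketch: candidate rules from two random samples s1, s2.

    Each rule is a dict with an antecedent, a target (consequent) attribute,
    the set of allowed consequent values seen in `subset`, and p = r/n.
    """
    matching = [a for a in s1 if s1[a] == s2.get(a)]
    random.shuffle(matching)
    if not matching:
        return []
    target, conditions = matching[0], matching[1:]
    rules, antecedent = [], {}
    for k in range(len(conditions) + 1):
        matches = [s for s in subset
                   if all(s.get(a) == v for a, v in antecedent.items())]
        allowed = {s[target] for s in matches}
        n, r = len(matches), len(allowed)
        p = r / n if n else 1.0          # estimated on the small subset (S1-S3)
        rules.append({"antecedent": dict(antecedent), "target": target,
                      "allowed": allowed, "p": p})
        if k < len(conditions):
            antecedent[conditions[k]] = s1[conditions[k]]
    return sorted(rules, key=lambda rule: rule["p"])   # increasing order of p
```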
Step 2: Coverage Test
• Obtain a minimal set of rules that covers the training samples
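A rough sketch of the coverage test under the dictionary-based rule representation used above: rules are considered in increasing order of p, and a rule is kept only if it covers something that no earlier rule already covered.

```python
def coverage_test(rules, training_subset):
    """Step 2 sketch: keep a minimal rule set. Rules are taken in increasing
    order of p; a rule survives only if it is the first to cover some
    (sample, target-attribute) pair in the training subset."""
    covered, minimal = set(), []
    for rule in sorted(rules, key=lambda r: r["p"]):
        newly_covered = set()
        for i, s in enumerate(training_subset):
            if all(s.get(a) == v for a, v in rule["antecedent"].items()):
                key = (i, rule["target"])
                if key not in covered:
                    newly_covered.add(key)
        if newly_covered:            # covers something no earlier rule did
            covered |= newly_covered
            minimal.append(rule)
    return minimal
```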
Step 3: Updating Rules Beyond the Training Samples
• Extend rules to the entire training (minus validation) set (samples S1–S5)
Step 4: Validating Rules
• Test the set of rules on the validation set (S6)
• Remove rules that produce anomalies
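A minimal sketch of Steps 3 and 4 with the same rule dictionaries: each rule's consequent set and counts are first extended over the full training portion, then any rule that raises an anomaly on the held-out validation samples is discarded.

```python
def update_rules(rules, training_set):
    """Step 3 sketch: extend each rule's consequent values and counts over
    the whole training portion (e.g. samples S1-S5) and re-estimate p."""
    for rule in rules:
        matches = [s for s in training_set
                   if all(s.get(a) == v for a, v in rule["antecedent"].items())]
        rule["allowed"] |= {s[rule["target"]] for s in matches}
        n, r = len(matches), len(rule["allowed"])
        rule["p"] = r / n if n else 1.0

def validate_rules(rules, validation_set):
    """Step 4 sketch: drop any rule violated on the validation set (e.g. S6),
    since such rules would contribute false alarms."""
    def violated(rule, s):
        return (all(s.get(a) == v for a, v in rule["antecedent"].items())
                and s[rule["target"]] not in rule["allowed"])
    return [rule for rule in rules
            if not any(violated(rule, s) for s in validation_set)]
```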
Learning Rules for Anomaly Detection (LERAD)
• Nonstationary model – only the last occurrence of an event is important
• Anomaly score of a record = Σ_i t_i / p_i, summed over the rules i that the record violates
• t_i – time interval since rule i was last violated (the last anomalous event for that rule)
• i – index of the rule violated
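A small sketch of this scoring scheme, assuming the rule dictionaries from the earlier sketches plus a per-rule timestamp of the last violation: each violated rule contributes t_i / p_i to the record's anomaly score.

```python
def anomaly_score(rules, sample, now):
    """Nonstationary scoring sketch: each rule i violated by this sample
    contributes t_i / p_i, where t_i is the time since rule i was last
    violated and p_i = r_i / n_i."""
    score = 0.0
    for rule in rules:
        matches_antecedent = all(sample.get(a) == v
                                 for a, v in rule["antecedent"].items())
        if matches_antecedent and sample[rule["target"]] not in rule["allowed"]:
            t = now - rule.get("last_violation", 0.0)
            score += t / rule["p"]
            rule["last_violation"] = now   # only the last occurrence matters
    return score
```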
Variants of Attributes
3 variants:
• S-LERAD: system call sequence
• A-LERAD: system call arguments
• M-LERAD: system call arguments + sequence
S-LERAD
• System call sequence-based LERAD
• Each sample comprises 6 contiguous system call tokens and is input to LERAD
A-LERAD
• Samples contain a system call along with its arguments
• The system call is always a condition in the antecedent of the rule
M-LERAD
• Combination of system call sequences and arguments (see the sketch below)
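To illustrate the three attribute sets, the sketch below builds per-sample tuples from a trace of (system call, arguments) events: a sliding window of 6 call names for S-LERAD, the call name plus its arguments for A-LERAD, and both together for M-LERAD. The field names and argument layout are assumptions for illustration, not the paper's exact encoding.

```python
def s_lerad_samples(trace, window=6):
    """S-LERAD sketch: each sample is a window of 6 contiguous system calls."""
    calls = [event["syscall"] for event in trace]
    return [{f"call{i}": calls[start + i] for i in range(window)}
            for start in range(len(calls) - window + 1)]

def a_lerad_samples(trace, max_args=3):
    """A-LERAD sketch: the system call plus its arguments; the call name is
    always available to serve as an antecedent condition."""
    samples = []
    for event in trace:
        s = {"syscall": event["syscall"]}
        for i, arg in enumerate(event.get("args", [])[:max_args]):
            s[f"arg{i}"] = arg
        samples.append(s)
    return samples

def m_lerad_samples(trace, window=6, max_args=3):
    """M-LERAD sketch: argument attributes of the current call merged with the
    preceding sequence of call names."""
    seq = s_lerad_samples(trace, window)
    args = a_lerad_samples(trace, max_args)
    # Align each window with the arguments of its last call.
    return [{**seq_sample, **args[start + window - 1]}
            for start, seq_sample in enumerate(seq)]
```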
1999 DARPA IDS Evaluation [Lippmann et al., 2000]
• Week 3 – training data (~2.1 million system calls)
• Weeks 4 and 5 – test data (over 7 million system calls)
• Total – 51 attacks on the Solaris host
Experimental Procedures
• Preprocessing the data: BSM audit log → applications 1..N → processes (Pi, Pj, Pk, …)
• One model per application
• Merge all alarms
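A rough sketch of this per-application setup, with hypothetical field names and with the training and scoring routines left as placeholders: audit events are grouped by application, one model is built per application, and the resulting alarms are merged into a single stream.

```python
from collections import defaultdict

def group_by_application(events):
    """Group BSM audit events by the application that issued them (sketch)."""
    per_app = defaultdict(list)
    for event in events:
        per_app[event["application"]].append(event)
    return per_app

def detect(events, train_model, score_events):
    """One model per application; alarms from all applications are merged and
    sorted by time. `train_model` and `score_events` stand in for the LERAD
    training and scoring routines sketched earlier."""
    alarms = []
    for app, app_events in group_by_application(events).items():
        model = train_model(app_events)
        alarms.extend(score_events(model, app_events))
    return sorted(alarms, key=lambda a: a["time"])
```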
Evaluation Criteria
• Attack detected if an alarm is generated within 60 seconds of the occurrence of the attack
• Number of attacks detected @ 10 false alarms/day
• Time and storage requirements
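A small sketch of this criterion under assumed alarm and attack record formats: an attack counts as detected if some alarm falls within 60 seconds of it, alarms matching no attack are false alarms, and the false alarm count is normalized per day of test data.

```python
def evaluate(alarms, attacks, test_days, window=60.0):
    """Evaluation sketch: an attack is detected if some alarm occurs within
    `window` seconds of it; alarms matching no attack are false alarms,
    reported per day over `test_days` of test data."""
    detected, false_alarms = set(), 0
    for alarm in alarms:
        hits = [i for i, attack in enumerate(attacks)
                if abs(alarm["time"] - attack["time"]) <= window]
        if hits:
            detected.update(hits)
        else:
            false_alarms += 1
    return len(detected), false_alarms / test_days
```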
Storage Requirements
• More data extracted (system calls + arguments) – more space
• Only during training – can be done offline
• Small rule set vs. large database (stide, t-stide)
• e.g., for the tcsh application: 1.5 KB file for the set of rules (M-LERAD) vs. 5 KB for the sequence database (stide)
Summary of Contributions
• Introduced argument information to model systems
• Enhanced LERAD to form rules with system calls as pivotal attributes
• LERAD with argument information detects more attacks than existing system call sequence-based algorithms (tide, stide, t-stide)
• The sequence + argument based system generally detected the most attacks at different false alarm rates
• Argument information alone can be used effectively to detect attacks at lower false alarm rates
• Lower memory requirements during detection compared to sequence-based techniques
Future Work
• A richer representation: more attributes, e.g., time between subsequent system calls
• Anomaly score: t-stide vs. LERAD