An Overview of Intrusion Detection & Countermeasure Systems – Research Directions, Part II
Fernando C. Colon Osorio
Computer Science Department, Worcester, MA 01609
PEDS II - 10072002
Outline
• Previous Talk – what did we cover last?
  • Definitions
  • A model of an intrusion
  • Basic approaches
  • Critical research problems
• PEDS – Part II
  • S.A.F.E. architecture
  • S.A.F.E. approach to critical research problems
  • Other related topics
• Conclusions
Intrusion Detection System – Definition
Formal definition [10], [11]: "Intrusion Detection (ID) is the problem of identifying individuals who are using, or attempting to use, a computer system without authorization (i.e., crackers) and those who have legitimate access to the system but are abusing their privileges (i.e., the insider threat)."
Intrusion Timeline (figure, shown across several slides): the system starts out secure/dependable; intrusion attempts begin (1st, 2nd, …), the Mth and Nth attempts succeed and the system is no longer secure/dependable; the intrusion is detected by the IDS and/or IDCS, countermeasures are launched, and the system eventually becomes secure/dependable again. The intervals marked on the timeline are MTBASI, MTTID, MTTCI, and MTBSI. Successive slides highlight where attacks begin, where an attack is successful, the diagnosis region, the repair/re-integration region, and the point at which the system is operational again.
Anomaly vs. Misuse IDS Systems
In past years, multiple intrusion detection systems have been proposed and implemented. All of the proposed systems are based on one of two basic approaches:
• anomaly detection
• misuse detection
Note: Kumar [13] presents a fairly complete categorization of the most important systems proposed or built thus far.
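A minimal sketch contrasting the two approaches, assuming a single numeric measure for the anomaly case and a hypothetical signature list for the misuse case (neither reflects any particular IDS):

```python
# Illustrative only: the signatures, feature, and threshold are hypothetical.

KNOWN_ATTACK_SIGNATURES = {"GET /../../etc/passwd", "' OR 1=1 --"}

def misuse_detect(event: str) -> bool:
    """Misuse detection: flag events matching a previously categorized attack."""
    return any(sig in event for sig in KNOWN_ATTACK_SIGNATURES)

def anomaly_detect(measure: float, profile_mean: float, profile_std: float,
                   k: float = 3.0) -> bool:
    """Anomaly detection: flag measures deviating too far from the learned profile."""
    return abs(measure - profile_mean) > k * profile_std
```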
Intrusion Timeline (figure, repeated), annotated with the realm of anomaly detection techniques and the realm of misuse detection techniques along the timeline.
Figure 1 – Generic Intrusion Detection Model [Denning] (figure): the Event Generator observes the environment (audit trails, network packets, application trails) and produces the event stream S = { s1, s2, …, sn }; events update the Activity Profile, which, together with the Clock, drives the Rule Set to generate anomaly records, assert new rules, modify existing rules, and generate new profiles dynamically.
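A minimal sketch of the control loop implied by the model above, assuming a single numeric measure per event and a simple exponentially weighted profile (the update rule and threshold are assumptions, not part of Denning's model):

```python
from dataclasses import dataclass

@dataclass
class ActivityProfile:
    mean: float = 0.0
    var: float = 1.0
    n: int = 0

def update_profile(p: ActivityProfile, x: float, alpha: float = 0.1) -> None:
    """Update the activity profile with an exponentially weighted moving average."""
    p.mean = (1 - alpha) * p.mean + alpha * x
    p.var = (1 - alpha) * p.var + alpha * (x - p.mean) ** 2
    p.n += 1

def handle_event(p: ActivityProfile, x: float, threshold: float = 3.0) -> bool:
    """Apply the rule set: emit an anomaly record if the event deviates from the profile."""
    anomalous = p.n > 10 and abs(x - p.mean) > threshold * (p.var ** 0.5)
    update_profile(p, x)
    return anomalous
```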
Problems with Current Approaches
Among the most important considerations and limitations present in the design of all such systems are the following problems.
• Problem #1: Feature selection and pattern categorization.
  • Simply stated, Denning's model (Figure 1) assumes that the event generator can effectively select, a priori, the set of features or measures to monitor that will render an optimal set for intrusion detection.
• Problem #2: The problem of adaptation.
  • Systems have been built and deployed that deal very effectively with threats or intrusions previously reported and categorized.
  • When previously unseen threats appear, these systems perform poorly.
  • In the 1999 DARPA Off-Line Intrusion Detection Evaluation [14], it was reported that the systems under test failed to detect an attack in 17.2% of cases.
Problems, cont'd
• Problem #3: Fault tolerance
  • Resistance to subversion: systems do fail due to accidental or malicious activities.
  • The system being designed must be able to recover from traditional forms of failure such as crashes, software failures, and so forth.
  • The system must be able to protect itself from deliberate attempts to compromise it.
• Problem #4: Performance
  • The system must impose minimal overhead on the system it is protecting while running.
  • The system must be able to sustain its performance characteristics under increasing loads and changes in the pattern of usage.
Problems, cont'd
• Problem #5: Intrusion detection system evaluation & characterization
  • Workloads
  • Definition of "goodness" – i.e., how reliable is the system in detecting intrusions?
    • MTBSI
    • MTTR after an intrusion?
    • Others?
Secure Architecture & Fail-safe Engine (S.A.F.E.)
The S.A.F.E. intrusion detection & countermeasure system was conceived with the specific goal of attacking the above problems, among others. S.A.F.E. is a distributed rather than a network- or host-based system.
• Its structure is similar to that proposed in AAFID [see Balasubramaniyan and Garcia-Fernandez, 1998] in the sense that it depends on a set of autonomous objects (agents in AAFID nomenclature) that can reside anywhere in a network.
• However, S.A.F.E. differs significantly from AAFID and other distributed intrusion detection systems (see, e.g., the EMERALD system) in that all control is distributed and implemented using a system-wide distributed component called the Trust Manager (TTM).
Secure Architecture & Fail-safe Engine (S.A.F.E.)
The S.A.F.E. intrusion detection & countermeasure system architecture is:
• Real time
• Distributed
  • Most existing IDSs are either host- or network-based: single point of failure, lack of scalability, poor performance.
  • S.A.F.E. architecturally/logically partitions the functions of an IDS into a set of objects that can reside anywhere in the system or the network.
• Hierarchical
  • A set of basic services is provided reliably and securely to the upper layers of the architecture.
• Fault-resilient, against both
  • internal failures, and
  • external attacks.
Secure Architecture & Fail-safe Engine (S.A.F.E.)
S.A.F.E. architectural components:
• Probes := data collection objects. Probes are started/stopped by the Event Generator Object (EGO).
• EGO := an independently running entity that provides some of the basic S.A.F.E. services:
  • starts/stops probes;
  • filters data from probes, providing data collection and a data abstraction layer (for a common architecture);
  • implements lower-level intrusion detection functions;
  • provides a class-of-event service.
• Secure Host Manager (SHM) := an independently running entity that provides two basic services:
  • intrusion detection for the host;
  • implementation of "learning" functions.
• In addition, the SHM:
  • provides the Trust Manager with a "last gasp" message – "I am potentially compromised" – before launching its countermeasure functions;
  • stops processing further requests from the Trust Manager – takes itself off-line;
  • launches countermeasures;
  • starts/stops all EGO objects.
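An illustrative sketch of the object interfaces implied by the list above; the class and method names are assumptions made for exposition, not the actual S.A.F.E. implementation:

```python
class Probe:
    """Data collection object; started and stopped by an EGO."""
    def start(self) -> None: ...
    def stop(self) -> None: ...
    def read(self) -> list:              # raw, probe-specific events
        return []

class EventGeneratorObject:
    """Filters probe data into a common abstraction and runs lower-level detection."""
    def __init__(self, probes: list):
        self.probes = probes
    def collect(self) -> list:           # data collection + abstraction layer
        return [e for p in self.probes for e in p.read()]
    def classify_event(self, event) -> str:   # class-of-event service
        return "unknown"

class SecureHostManager:
    """Host-level intrusion detection, learning, and countermeasure launching."""
    def detect(self, events: list) -> bool: ...
    def last_gasp(self) -> None: ...           # "I am potentially compromised"
    def launch_countermeasures(self) -> None: ...
```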
Secure Architecture & Fail-safe Engine (S.A.F.E.)
S.A.F.E. system-wide services:
• TTM := the Trust Manager. The TTM is an independent entity serving three major functions:
  • It "knows" which logical "nodes" can be trusted.
    • This is accomplished through the concept of a trust relationship matrix.
    • A node's trust measure t_ij is added, modified, and changed using a distributed software algorithm.
    • The algorithm is based on the solution to the Byzantine Generals Problem.
  • It delivers the "last gasp" message to other nodes in the system.
  • It prevents partition of the trust system.
• Atomicity Manager (ATOM) := an independently running entity that provides atomic, secure operations across the system – one of the basic S.A.F.E. services.
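A toy illustration of a trust relationship matrix; the majority-vote rule below is only a stand-in for the Byzantine-agreement-based algorithm the slide refers to, and all names are hypothetical:

```python
class TrustRelationshipMatrix:
    def __init__(self, nodes):
        # t[i][j] is node i's trust measure t_ij in node j; everyone starts fully trusted.
        self.t = {i: {j: 1.0 for j in nodes if j != i} for i in nodes}

    def set_trust(self, i, j, value):
        self.t[i][j] = value

    def is_trusted(self, j):
        """Treat node j as trusted only while a majority of the other nodes trust it."""
        votes = [row[j] > 0.0 for i, row in self.t.items() if j in row]
        return sum(votes) > len(votes) / 2

# Example: node b withdrawing its trust in node a.
ttm = TrustRelationshipMatrix(["a", "b", "e", "g", "h"])
ttm.set_trust("b", "a", 0.0)
```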
Secure Architecture & Fail-safe Engine (S.A.F.E.) – architecture (figure): probes (P), Event Generator Objects (EGO), and Secure Host Managers (SHM) are distributed across the hosts of the network and coordinated by Trust Managers (TTM).
S.A.F.E. SHM Intrusion Detection & Learning Engine (figure): the outputs C1(x), C2(x), …, Ck(x) of the lower-level intrusion detection classifiers (the EGOs) are combined as the weighted sum Σi wi·Ci(x) to produce the intrusion decision (true/false – intrusion correctly identified); a gating/weight-update function G(wi) adjusts the weights wi from labeled pairs (class(x), x).
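The decision stage in the figure can be read as a weighted vote over the base classifiers; a minimal sketch, where the 0.5 threshold is an assumption:

```python
def combined_decision(classifiers, weights, x, threshold=0.5):
    """Intrusion iff the weighted sum of base classifier outputs C_i(x) in {0, 1}
    exceeds the threshold: sum_i w_i * C_i(x) > threshold."""
    score = sum(w * c(x) for w, c in zip(weights, classifiers))
    return score > threshold
```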
Intrusion Detection Models
A Network Model
• A trust function Tij(t), for i ≠ j, exists between two nodes; it is not necessarily symmetrical.
• The trust function Tij(t) changes over time.
• In addition, the lack of trust between two nodes is denoted by a trust relationship of zero value, Tij(t) = 0.
• In the above example, node a is the source of the intruder attack, while node h is the target of the attack. Note that the possible paths for the intruder are:
  • Path 1: a → e → g → h
  • Path 2: a → b → e → g → h
  • Path 3: a → d → g → h
• This topological constraint amongst nodes in a network has a significant advantage over other approaches: it allows the designer of the IDC system to create multiple logical layers of defense against intruders, in effect creating time to detect potential intrusions and thwart them.
• Example
  • Suppose nodes b and e suspect an intrusion by using traditional audit methods. Then nodes b and e can invoke a state change on their trust relationships with other nodes such that
    Taj(t) = 0 for all j ≠ a and t > t_intrusion; and
    Equation 1: Tej(t) = 0 for all j ≠ e and t > t_intrusion.
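A minimal sketch of the state change in Equation 1, with the trust matrix kept as a plain dict-of-dicts (the node ids and data structure are illustrative):

```python
def isolate_node(trust, a):
    """Equation 1: after an intrusion is suspected, set T_aj = 0 for all j != a,
    cutting node a off from the rest of the trust system."""
    for j in trust[a]:
        if j != a:
            trust[a][j] = 0.0

# Nodes b and e suspecting an intrusion originating at node a would invoke:
#   isolate_node(trust, "a"); isolate_node(trust, "e")
```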
Problems & Well-Known Solutions Present in the IDCS Field
• Problem #1: Feature selection and pattern categorization.
  • Simply stated, Denning's model (Figure 1) assumes that the event generator can effectively select, a priori, the set of features or measures to monitor that will render an optimal set for intrusion detection.
Figure 1 – Generic Intrusion Detection Model [Denning] (figure, repeated from earlier).
Learning, Feature Selection, Inductive Learning, Learning with a Teacher, KDD Systems
Given a training set of training samples,
T = { (x1, y1), (x2, y2), …, (xn, yn) },
for some unknown function ƒ(x) = y,
where each xi is an attribute vector of the form
xi = { xi1, xi2, …, xik },
and each yi is a class label belonging to the set
Y = { y1, y2, …, ym },
find the mapping (function) ƒ* such that ƒ*(x) ≈ ƒ(x).
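A minimal sketch of finding such a mapping ƒ*, using a one-nearest-neighbour rule purely as an example learner (the choice of learner and of squared-Euclidean distance are assumptions, not part of the original formulation):

```python
def fit_nearest_neighbour(T):
    """T is a list of (x_i, y_i) pairs; returns an f* with f*(x) ≈ f(x)."""
    def f_star(x):
        def dist(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        _, label = min(((dist(x, xi), yi) for xi, yi in T), key=lambda pair: pair[0])
        return label
    return f_star

# Example: f_star = fit_nearest_neighbour([((0, 0), "normal"), ((9, 9), "intrusion")])
#          f_star((8, 8))  ->  "intrusion"
```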
Learning, Feature Selection, Inductive Learning, Learning with a Teacher, KDD Systems, cont'd
Three important and critical questions:
Q1: Does an ƒ*(x) exist? I.e., is there a pattern? Is there knowledge to be mined? Is there learning to be developed?
Q2: What is the correct set of features xik providing the "best" inference rules?
Q3: If ƒ*(x) exists, what is the computational complexity of the algorithms trying to find ƒ*(x)? In this context, we are interested in algorithms that can find ƒ*(x) in finite time. Or: what is the computational quality, Qc(s), of the algorithm? In this context, given two algorithms A1 and A2 that find ƒ*(x), Qc(A1) > Qc(A2) if the running time of A1 is less than that of A2.
Problems & Well-Known Solutions Present in the IDCS Field
• Problem #2: The problem of adaptation.
  • Systems have been built and deployed that deal very effectively with threats or intrusions previously reported and categorized.
  • When previously unseen threats appear, these systems perform poorly.
  • In the 1999 DARPA Off-Line Intrusion Detection Evaluation [14], it was reported that the systems under test failed to detect an attack in 17.2% of cases.
Figure 1 – Generic Intrusion Detection Model [Denning] (figure, repeated from earlier).
Figure 2 – A Simplified Intrusion Detection Engine (figure): a decision engine fg(y, S, M, P(n), T, G) observes the event stream S = { s1, s2, …, sn } from the environment and the clock, consults the memory of the IDS (rule set / activity profile), produces actions a = { a1, a2, …, an }, and creates new rules/profiles or modifies existing ones.
Meta-Learning
Loose definition [Chan '93]: "Learning from learned knowledge."
References:
[Chan '93] := P. Chan and S. Stolfo, "Experiments on multistrategy learning by meta-learning," Proc. Second International Workshop on Multistrategy Learning, pp. 150–165, 1993.
Meta-Learning, cont'd
Let the training set be instances of correct classifications and predictions; that is, the training set is of the form:
T = { ( class(x), C1(x), C2(x), …, Ck(x) ) | x ∈ E }
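A minimal sketch of constructing this meta-level training set from base classifiers C1 … Ck and labeled examples E (the function names and data representation are illustrative):

```python
def build_meta_training_set(base_classifiers, labeled_examples):
    """Each row is ( class(x), C_1(x), ..., C_k(x) ) for x in E, where
    labeled_examples is an iterable of (x, class(x)) pairs."""
    return [
        (true_class, *(c(x) for c in base_classifiers))
        for x, true_class in labeled_examples
    ]
```

A meta-level learner (the weight-adjustment stage G(wi) in the figure) is then trained on these rows to learn when each base classifier's prediction should be believed.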
S.A.F.E. SHM Intrusion Detection & Learning Engine (figure, repeated from earlier): the weighted combination Σi wi·Ci(x) of base classifiers with the gating/weight-update function G(wi).
Problems, cont'd
• Problem #3: Fault tolerance
  • Resistance to subversion: systems do fail due to accidental or malicious activities.
  • S.A.F.E.'s answer: the Trust Manager (TTM).
• Problem #4: Performance
  • Distributed architecture
  • Lightweight processes
  • Simple objects
Problems, cont'd
• Problem #5: Intrusion detection system evaluation & characterization
  • Workloads – testing; e.g., the 1999 DARPA evaluation uses 10 workloads (traces) to compare systems.
    • Results are empirical.
    • Not representative of all environments.
    • Very little data available.
  • Definition of "goodness" (modeling)
    • How reliable is the system in detecting intrusions?
    • How intrusion-resilient is the underlying system being protected?
Intrusion Timeline (figure, repeated), now annotated with MTTR in addition to MTBASI, MTTID, MTTCI, and MTBSI.
Problem #5: Intrusion Detection System Evaluation & Characterization, cont'd
• Definition of "goodness" – i.e., how reliable is the system with respect to intrusions?
  • MTBFi – mean time between failures/intrusions (defn: in this context a failure is a successful intrusion) – a measure of the underlying system being protected.
  • MTTID – mean time to intrusion detection.
  • MTTCI – mean time to countermeasure issuance.
  • MTTR – mean time to contain & repair a successful intrusion.
  • Of course, availability: A(t) = P({system is operational and intrusion free at time t1, given that it was intrusion free at time t0}).
• Other metrics:
  • MTBASI – mean time between the 1st attack and a successful intrusion.
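A rough sketch of estimating such quantities from timestamped event logs; the pairing of events and the simple averaging are assumptions made for illustration:

```python
def mean_interval(times):
    """Mean gap between consecutive events of one kind, e.g., successive
    successful intrusions (an MTBFi-style estimate)."""
    gaps = [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def mean_delay(start_times, end_times):
    """Mean delay between paired events, e.g., successful intrusion -> detection
    (MTTID) or detection -> countermeasure issuance (MTTCI)."""
    return sum(e - s for s, e in zip(start_times, end_times)) / len(start_times)
```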
Problem #5: Intrusion Detection System Evaluation & Characterization, cont'd
• Definition of "goodness" – i.e., how reliable is the system with respect to intrusions?
  • MTBFi – mean time between failures/intrusions (defn: in this context a failure is a successful intrusion)
    • A measure of the quality of the software system (O/S, applications, and so forth).
    • Further assume that MTBFi ∝ the reliability of the software; then techniques such as
      • Component-Based Reliability Estimation (CBRE), see Krishnamurthy and Mathur; and
      • software test coverage and reliability techniques, see Malaiya and Karcich – where software testing & coverage are used as predictors of software reliability, to estimate the number of remaining defects or residual faults, and to estimate the mean time between failures (bugs)
    • are all applicable.
  • Challenge – to define an accurate model!
Other Related Topics – Honeypots & Honeynets
Honeypot:
• A honeypot is a fake system set up to lure the hacker in; it provides another obstacle for the attacker.
• Honeypot systems are decoy servers or systems set up to gather information about an attacker or intruder into your system.
• Honeypot traps tempt intruders into areas that appear attractive, worth investigating, and easy to access, drawing them away from the really sensitive areas of your systems. They do not replace other traditional Internet security systems but act as an additional safeguard with alarms.
• A honeypot is a resource that pretends to be a real target. A honeypot is expected to be attacked or compromised. The main goals are the distraction of the attacker and the gathering of information about an attack and the attacker.
Honeypots
Honeypots will help you:
• notice when you are penetrated
• learn how attacks are formed
• identify who is attacking you
Honeypot Examples
• Honeypot Project
  • http://www.landfield.com/isn/mail-archive/2000/Nov/0124.html
• Deception Tool Kit Project
  • http://www.all.net/dtk/index.html
• Specter
  • http://www.specter.com/default50.htm
“Specter” – Basic Idea
• Virtual machine (VM) environment
• Early traps
• Early detection
Honeypot Tools – “Specter”
Honeypot Limitations
• Hard to maintain
• Human-resource intensive – specialized knowledge required:
  • operating systems
  • network security
  • current deficiencies (holes) in both the O/S and applications
Honeynet
Honeynets ⊂ honeypots.
Honeynet (defn):
• A network system
• All systems are standard production systems
• All usage is ~ production
Conclusions
• A new model based on the Byzantine Generals Problem will be investigated.
• The research area is ripe for discovery.