Testing IDS • Despite the enormous investment in IDS technology, no comprehensive and scientifically rigorous methodology is available to test IDS. • Quantitative IDS performance measurement results are essential in order to compare different systems.
Testing IDS • Quantitative results are needed by: • Acquisition managers – to improve the process of system selection. • Security analysts – to know the likelihood that the alerts produced by an IDS are caused by real attacks that are in progress. • Researchers and developers – to understand the strengths and weaknesses of IDS in order to focus research efforts on improving systems and measuring their progress.
Testing IDS • Quantitatively measurable IDS characteristics: • Coverage • Probability of false alarms • Probability of detection • Resistance to attacks directed at the IDS • Ability to handle high bandwidth traffic • Ability to correlate events • Ability to detect new attacks
Testing IDS • Quantitatively measurable IDS characteristics (cont.): • Ability to identify an attack • Ability to determine attack success • Capacity verification (NIDS).
Testing IDS • Coverage • Determines which attacks an IDS can detect under ideal conditions. • For misuse (signature-based) systems, coverage is determined by counting the number of signatures and mapping them to a standard naming scheme (a sketch of this mapping follows). • For anomaly detection systems, by determining which attacks, out of the set of all known attacks, could be detected by a particular methodology.
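As an illustration of the signature-mapping approach, here is a minimal Python sketch; the signature identifiers, CVE mappings and the set of known attacks are all hypothetical examples:

```python
# Minimal coverage sketch: map an IDS's signatures to a standard naming
# scheme (CVE here) and compute what fraction of a known attack set is covered.
# All identifiers below are invented for illustration.

ids_signatures = {
    "SIG-0001": "CVE-2002-0392",   # hypothetical signature -> CVE mapping
    "SIG-0002": "CVE-2001-0144",
    "SIG-0003": None,              # signature with no standard mapping
}

known_attacks = {"CVE-2002-0392", "CVE-2001-0144", "CVE-2003-0245"}

covered = {cve for cve in ids_signatures.values() if cve is not None}
coverage = len(covered & known_attacks) / len(known_attacks)

print(f"coverage: {coverage:.0%}")   # -> coverage: 67%
```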
Testing IDS • Coverage (cont.) • The problem with determining the coverage of an IDS lies in the fact that various researchers characterize attacks using different numbers and kinds of parameters.
Testing IDS • Coverage (cont.) • These characterizations may take into account the particular goal of the attack (DoS, penetration, scanning, etc.), the software, protocol and/or OS against which it is targeted, the victim type, the data to be collected in order to obtain the evidence of the attack, the use or not of IDS evasion techniques, etc. • Combinations of these parameters are also possible.
Testing IDS • Coverage (cont.) • The consequence of these differences is that attack definitions vary in granularity, from coarse to fine. • Because of this disparity in granularity, it is difficult to determine the attack coverage of an IDS precisely.
Testing IDS • Coverage (cont.) • CVE is an attempt to alleviate this problem. • But the CVE approach does not work either when multiple attacks exploit the same vulnerability using different approaches (for example, to evade IDS).
Testing IDS • Coverage (cont.) • Determining the importance of different attack types is also a problem when determining coverage. • Different environments may assign different costs and importance to detecting different types of attacks (a weighted-coverage sketch follows these examples). • Example: • An e-commerce site may not be interested in surveillance attacks, but may be very interested in detecting DDoS attacks.
Testing IDS • Coverage (cont.) • Example (cont.): • A military site may be especially interested in detecting surveillance attacks, in order to prevent more serious attacks by reacting to them in their early phases. • Another problem with coverage is determining which attacks to cover with respect to system updates.
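The cost/importance point can be made concrete with a small sketch: the same IDS, with the same per-category detection rates, scores differently under different environment weightings. The categories, weights and detection rates below are invented for illustration:

```python
# Sketch: weighting coverage by the importance each environment assigns
# to attack categories. All numbers are illustrative.

def weighted_coverage(detected_by_category, weights):
    """detected_by_category: category -> fraction of attacks detected."""
    total = sum(weights.values())
    return sum(weights[c] * detected_by_category.get(c, 0.0)
               for c in weights) / total

detected = {"surveillance": 0.9, "ddos": 0.5, "penetration": 0.7}

# An e-commerce site cares mostly about DDoS ...
ecommerce = weighted_coverage(detected, {"surveillance": 1, "ddos": 10, "penetration": 5})
# ... while a military site weights surveillance highly.
military = weighted_coverage(detected, {"surveillance": 10, "ddos": 1, "penetration": 5})

print(f"e-commerce score: {ecommerce:.2f}, military score: {military:.2f}")
# The same IDS scores ~0.59 for the e-commerce weighting and ~0.81 for the
# military weighting, with identical per-category detection rates.
```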
Testing IDS • Coverage (cont.) • Example: • It is pointless to test IDS coverage of attacks against which the defended system has already been protected (by patching, hardening, etc.)
Testing IDS • Probability of false alarms • Suppose that we have N IDS decisions, of which: • In TP cases: intrusion – alarm. • In TN cases: no intrusion – no alarm. • In FP cases: no intrusion – alarm. • In FN cases: intrusion – no alarm. • Total intrusions: TP+FN • Total no-intrusions: FP+TN • N=TP+FN+FP+TN • Base rate B – the probability of an attack: B = P(I) = (TP+FN)/N
Testing IDS • Probability of false alarms (cont.) • Events: alarm A, intrusion I • The following rates are defined: • True positive rate: TPR = P(A|I) = TP/(TP+FN) • True negative rate: TNR = P(¬A|¬I) = TN/(TN+FP)
Testing IDS • Probability of false alarms (cont.) • False positive rate: FPR = P(A|¬I) = FP/(FP+TN) • False negative rate: FNR = P(¬A|I) = FN/(TP+FN)
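A small Python sketch, with invented counts, shows how these rates are computed and why the base rate matters: by Bayes' rule, a low base rate can make P(I|A) small even for an IDS with a high detection rate and a low false positive rate:

```python
# Sketch: computing the rates above from raw counts, and using Bayes' rule
# to show the effect of a low base rate. The counts are invented.

TP, TN, FP, FN = 80, 98_900, 1_000, 20   # N = 100_000 decisions
N = TP + TN + FP + FN

base_rate = (TP + FN) / N                # P(I)
TPR = TP / (TP + FN)                     # P(A | I), detection rate
TNR = TN / (TN + FP)                     # P(not A | not I)
FPR = FP / (FP + TN)                     # P(A | not I)
FNR = FN / (TP + FN)                     # P(not A | I)

# Bayes: P(I|A) = P(A|I) P(I) / (P(A|I) P(I) + P(A|not I) P(not I))
p_intrusion_given_alarm = (TPR * base_rate) / (TPR * base_rate + FPR * (1 - base_rate))

print(f"base rate = {base_rate:.4f}")
print(f"TPR = {TPR:.2f}, FPR = {FPR:.4f}")
print(f"P(I | A)  = {p_intrusion_given_alarm:.3f}")  # only ~0.07 here
```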
Testing IDS • Probability of false alarms (cont.) • This measure determines the rate of false positives produced by an IDS in a given environment during a particular time frame.
Testing IDS • Probability of false alarms (cont.) • Typical causes of false positives: • Weak signatures (alerting on all traffic to a specific port, searching for a common word such as "help" in the first 100 bytes of SMTP or other TCP connections, alerting on common violations of the TCP protocol, etc.) – a sketch of such a weak signature follows. • Normal network monitoring and maintenance traffic.
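A minimal sketch of such a weak signature, with invented payloads, shows how it fires on benign traffic as readily as on an attack:

```python
# Sketch of why such signatures misfire: a naive rule that alerts whenever
# the word "help" occurs in the first 100 bytes of a connection also fires
# on perfectly legitimate traffic. Both payloads are invented examples.

def weak_signature(payload: bytes) -> bool:
    return b"help" in payload[:100].lower()

attack = b"HELP me exploit this overflow " + b"A" * 200   # hypothetical attack
benign = b"Subject: help with my order\r\n\r\nHi, ..."     # ordinary mail traffic

print(weak_signature(attack))   # True  - detected
print(weak_signature(benign))   # True  - false positive
```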
Testing IDS • Probability of false alarms (cont.) • Difficulties in measuring the false alarm rate: • An IDS may have a different false positive rate in different network environments, and a "standard network" does not exist. • It is difficult to determine which aspects of network traffic or host activity will cause false alarms. • Consequence: it is difficult to guarantee that a test network will produce the same number and type of false alarms as a real network.
Testing IDS • Probability of false alarms (cont.) • Difficulties in measuring the false alarm rate (cont.): • An IDS can be configured in many ways, and it is difficult to determine which configuration should be used for a particular false positive test.
Testing IDS • Probability of detection • This measurement determines the rate of attacks detected correctly by an IDS in a given environment during a particular time frame.
Testing IDS • Probability of detection • Difficulties in measuring the probability of detection: • The success of an IDS is largely dependent upon the set of attacks used during the test. • The probability of detection varies with the false positive rate, so the same IDS configuration must be used when testing for false positives and for hit rates.
Testing IDS • Probability of detection (cont.) • Difficulties in measuring probability of detection (cont.): • A NIDS can be evaded by using stealthy versions of attacks (fragmenting packets, using data encoding, using unusual TCP flags, encrypting attack packets, spreading attacks over multiple network sessions, launching attacks from multiple sources, etc.) – a fragmentation sketch follows. • This reduces the probability of detection, even though the same attack would be detected in its non-stealthy form.
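As a sketch of one evasion technique from the list above, the following Python fragment uses the third-party scapy library to split a hypothetical attack payload across tiny IP fragments; a NIDS that matches signatures per packet, without reassembly, never sees the full attack string:

```python
# Sketch: fragmentation-based evasion. Requires scapy; the target address
# and payload are hypothetical. Building packets needs no privileges
# (only sending them would).
from scapy.all import IP, TCP, Raw, fragment

attack = IP(dst="192.0.2.10") / TCP(dport=80) / \
         Raw(load=b"GET /cgi-bin/evil-exploit HTTP/1.0\r\n\r\n")

frags = fragment(attack, fragsize=8)     # 8-byte IP fragments
for f in frags:
    print(f.summary())                   # no single fragment holds "evil-exploit"

# A NIDS that does not reassemble fragments before signature matching
# misses the attack, although the reassembled stream is identical.
```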
Testing IDS • Resistance to attacks directed at the IDS • This measurement demonstrates how resistant an IDS is to an attacker’s attempt to disrupt the correct operation of the IDS.
Testing IDS • Resistance to attacks directed at the IDS • Some typical attacks against IDS: • Sending a large amount of non-attack traffic with a volume exceeding the IDS's processing capability – this causes the IDS to drop packets. • Sending to the IDS non-attack packets that are specially crafted to trigger many signatures within the IDS – the human operator is overwhelmed with false positives, or an automated analysis tool crashes.
Testing IDS • Resistance to attacks directed at the IDS (cont.) • Some typical attacks against IDS (cont.): • Sending to the IDS a large number of attack packets intended to distract the human operator, while the attacker launches a real attack hidden among these "false attacks". • Sending to the IDS packets containing data that exploit a vulnerability within the IDS processing algorithms themselves. Such vulnerabilities may be a consequence of coding errors.
Testing IDS • Ability to handle high bandwidth traffic • This measurement demonstrates how well an IDS will function when presented with a large volume of traffic. • Most NIDS start to drop packets as the traffic volume increases, which results in false negatives. • At a certain threshold, most IDS will stop detecting any attacks; a toy model of this degradation follows.
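The toy model below, with illustrative numbers, shows how detection might degrade if a NIDS simply drops whatever traffic exceeds its processing capacity; the capacity and base detection rate are assumptions, not measured values:

```python
# Toy model of detection degradation under load, assuming the NIDS drops
# all packets beyond its processing capacity. Numbers are illustrative.

def effective_detection(offered_mbps, capacity_mbps, base_detection=0.9):
    inspected = min(1.0, capacity_mbps / offered_mbps)  # fraction of traffic seen
    return base_detection * inspected

for load in (50, 100, 200, 400, 1000):
    print(f"{load:>5} Mbps -> detection ~ {effective_detection(load, 100):.2f}")
# Past the 100 Mbps capacity, detection falls off sharply with load.
```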
Testing IDS • Ability to correlate events • This measurement demonstrates how well an IDS correlates attack events. • These events may be gathered from IDS, routers, firewalls, application logs, etc. • One of the primary goals of event correlation is to identify penetration attacks. • Currently, IDS have limited capabilities in this area.
Testing IDS • Ability to detect new attacks • This measurement demonstrates how well an IDS can detect attacks that have not occurred before. • Purely signature-based systems will score zero here. • Anomaly-based systems may be suitable for this type of measurement; however, they generally produce more false alarms than signature-based systems. A minimal anomaly-detection sketch follows.
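A minimal anomaly-detection sketch (a simple z-score test on one invented feature, such as requests per minute) illustrates how previously unseen behaviour can be flagged without any signature:

```python
# Minimal anomaly-detection sketch: learn the mean and spread of a single
# feature from attack-free training data, then flag observations that
# deviate too far. All feature values are invented.
import statistics

train = [52, 48, 50, 55, 47, 53, 49, 51]          # normal behaviour
mu, sigma = statistics.mean(train), statistics.stdev(train)

def is_anomalous(x, k=3.0):
    return abs(x - mu) > k * sigma                # simple z-score test

print(is_anomalous(54))    # False - looks normal
print(is_anomalous(400))   # True  - never seen before, flagged as an attack
```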
Testing IDS • Ability to identify an attack • This measurement demonstrates how well an IDS can identify the attack that it has detected. • Each attack should be labelled with a common name or vulnerability identifier, or assigned to a category.
Testing IDS • Ability to determine attack success • This measurement demonstrates if the IDS can determine the success of attacks from remote sites that give the attacker higher-level privileges on the attacked system. • Many remote privilege-gaining attacks (probes) fail and do not damage the attacked system. • Many IDS do not distinguish between unsuccessful and successful attacks.
Testing IDS • Ability to determine attack success (cont.) • For the same attack, some IDS can detect the evidence of damage, while others detect only the signature of the attack actions. • The ability to determine attack success is essential for attack correlation and attack scenario analysis. • Measuring this capability requires information about both successful and unsuccessful attacks; a correlation sketch follows.
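A sketch of such a correlation, with hypothetical event formats and field names, decides attack success by matching a network alert against host-side audit evidence observed shortly afterwards:

```python
# Sketch: deciding attack success by correlating a network alert with
# host-side audit evidence. Event formats and field names are hypothetical.

alerts = [
    {"id": 1, "attack": "remote-root-exploit", "target": "10.0.0.5", "time": 100},
    {"id": 2, "attack": "remote-root-exploit", "target": "10.0.0.6", "time": 200},
]

# Host audit events observed on the monitored systems.
audit = [
    {"host": "10.0.0.5", "time": 104, "event": "new uid 0 shell spawned"},
]

def attack_succeeded(alert, audit_events, window=30):
    # Success = damage evidence on the target host within `window` seconds.
    return any(e["host"] == alert["target"]
               and 0 <= e["time"] - alert["time"] <= window
               for e in audit_events)

for a in alerts:
    verdict = "successful" if attack_succeeded(a, audit) else "failed/unknown"
    print(f"alert {a['id']}: {verdict}")
```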
Testing IDS • Capacity verification for NIDS • NIDS demand higher-level protocol awareness than other network devices (switches, routers, etc.) • NIDS inspect network packets more deeply than other devices do. • Therefore, it is important to measure the ability of a NIDS to capture and process traffic, and to perform at the same level of accuracy under a given network load as on a quiescent network.
Testing IDS • Capacity verification for NIDS (cont.) • Standardized capacity benchmarking methodologies for NIDS exist (e.g. Cisco has developed its own methodology). • NIDS customers can use standardized capacity test results for each metric, together with a profile of their own networks, to determine whether a NIDS is capable of inspecting their traffic.
Challenges of IDS testing • The following problems (at least) make IDS testing a challenging task: • Collecting attack scripts and victim software is difficult. • Requirements for testing signature-based and anomaly-based IDS are different. • Requirements for testing host-based and network-based IDS are different. • Using background traffic in IDS testing is not standardized.
Challenges of IDS testing • Collecting attack scripts and victim software • It is difficult and expensive to collect a large number of attack scripts. • Attack scripts are available in various repositories, but it takes time to find scripts relevant to a particular testing environment.
Challenges of IDS testing • Collecting attack scripts and victim software (cont.) • Once an adequate script is identified, it takes approx. one person-week to review the code, test the exploit, determine where the attack leaves evidence, automate the attack and integrate it into a testing environment.
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS • Most commercial systems are signature-based. • Many research systems are anomaly-based.
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS (cont.) • An ideal IDS testing methodology would be applicable to both signature-based and anomaly-based systems. • This is important because the research anomaly-based systems should be compared to the commercial signature-based systems.
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS (cont.) • The problems with creating a single test to cover both types of systems: • Anomaly-based systems with learning require normal traffic for training that does not include attacks. • Anomaly-based systems with learning may learn the behaviour of the testing methodology and perform well without detecting real attacks at all.
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS (cont.) • The problems with creating a single test to cover both types of systems (cont.): • This may happen when all the attacks in a test are launched from a particular user, IP address, subnet, or MAC address.
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS (cont.) • The problems with creating a single test to cover both types of systems (cont.): • Anomaly-based systems with learning can also learn subtle characteristics that are difficult to predetermine (packet window size, ports, typing speed, command set used, TCP flags, connection duration, etc.) and thus artificially perform well in the test environment – see the sketch below.
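The following sketch shows the degenerate case described above: a "detector" that has merely learned an incidental property of the test harness (here, a single hypothetical attacker IP address) scores perfectly on the test while knowing nothing about real attacks:

```python
# Sketch of the test artefact described above: an anomaly learner that keys
# on an incidental test property (the source IP) rather than attack
# behaviour. All addresses and events are invented.

ATTACKER_IP = "10.99.0.1"   # the single source used by the test harness

def degenerate_detector(event):
    # Learned during training: "everything from 10.99.0.1 is an attack".
    return event["src"] == ATTACKER_IP

test_events = [
    {"src": "10.99.0.1", "is_attack": True},
    {"src": "10.0.0.7",  "is_attack": False},
]

hits = sum(degenerate_detector(e) == e["is_attack"] for e in test_events)
print(f"test accuracy: {hits / len(test_events):.0%}")   # 100% - meaningless
```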
Challenges of IDS testing • Different requirements for testing signature-based and anomaly-based IDS (cont.) • The problems with creating a single test to cover both types of systems (cont.): • The performance of a signature-based system in a test will, to a large degree, depend on the set of attacks used in the test. • The decision about which attacks to include in a test may then favour a particular IDS – i.e., it may not be objective.
Challenges of IDS testing • Different requirements for testing host-based and network-based IDS • Testing host-based IDS presents some difficulties not present when testing network-based IDS: • A network-based IDS can be tested off-line by creating a log file containing TCP traffic and replaying that traffic to the IDS – this is convenient, because there is no need to test all the IDS at the same time (a replay sketch follows). • Repeatability of the test is easy to achieve.
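A minimal replay sketch, again using the third-party scapy library, illustrates the off-line approach; the pcap file name and network interface are placeholders, and sending packets requires appropriate privileges:

```python
# Sketch of off-line NIDS testing: capture traffic once, then replay the
# same pcap file to any NIDS under test. File name and interface are
# placeholders; sendp() typically requires root privileges.
from scapy.all import rdpcap, sendp

packets = rdpcap("recorded_test_traffic.pcap")   # previously captured TCP traffic
sendp(packets, iface="eth0", verbose=False)      # replay onto the monitored segment
```

Because the same file can be replayed to every IDS under test, the stimulus is identical across runs, which is what makes the test repeatable.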
Challenges of IDS testing • Different requirements for testing host-based and network-based IDS (cont.) • Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.): • Host-based IDS use a variety of system inputs to determine whether or not a system is under attack. • This set of inputs is not the same for all IDS. • Host-based IDS monitor a host, not a single data feed. • This makes it difficult to replay activity from log files.
Challenges of IDS testing • Different requirements for testing host-based and network-based IDS (cont.) • Testing host-based IDS presents some difficulties not present when testing network-based IDS (cont.): • Since it is difficult to test a host-based IDS off-line, an on-line test should be performed. • Consequence: problems of repeatability.
Challenges of IDS testing • Using background traffic in IDS testing • Four approaches: • Testing using no background traffic/logs • Testing using real traffic/logs • Testing using sanitized traffic/logs • Testing using simulated traffic/logs. • It is not clear which approach is the most effective for testing IDS. • Each of the four approaches has unique advantages and disadvantages.
Challenges of IDS testing • Using background traffic in IDS testing (cont.) • Testing using no background traffic/logs • This testing may be used as a reference condition. • An IDS is set up on a host/network on which there is no activity. • Then, computer attacks are launched on this host/network to determine whether or not the IDS can detect them. • This technique can determine the probability of detection (hit rate) under no load, but it cannot determine the false positive rate.
Challenges of IDS testing • Using background traffic in IDS testing (cont.) • Testing using no background traffic/logs (cont.) • Useful for verifying that an IDS has signatures for a set of attacks and that the IDS can properly label each attack. • Often much less costly than other approaches. • Drawback: tests using this technique are based on the assumption that an IDS's ability to detect an attack is the same regardless of background activity.
Challenges of IDS testing • Using background traffic in IDS testing (cont.) • Testing using no background traffic/logs (cont.) • At low levels of background activity, that assumption is probably true. • At high levels of background activity, the assumption is often false, since IDS performance degrades at high traffic intensities.