563.9.1 Denial of Service Attacks: Classification/Taxonomy
Presented by: Roger Fliege
DoS Group: Fariba Khan, Omid Fatemieh, Roger Fliege
University of Illinois, Spring 2006
Overview of DoS/DDoS
• Why are DoS attacks hard to defend against?
  • end-to-end paradigm: sender & receiver are responsible for security
  • the network is optimized for simple best-effort packet delivery; it doesn't police traffic
  • internet security is highly inter-dependent
  • internet resources are limited
  • intelligence and resources are not collocated
  • accountability is not enforced
  • control is distributed
Overview of DoS/DDoS
• DoS vs. DDoS: sending a few malformed packets vs. multiple streams of attack packets
• Appropriate defense may depend on recognizing which one it is
• Possibly recognized by:
  • attack packet header info
    • IP address, or Fragment ID and TTL fields
  • attack packet stream dynamics
    • ramp-up behavior: slower ramp-up implies multiple attackers
    • spectral analysis: frequency analysis of the packet trace (see sketch below)
[Hussain, Heidemann, Papadopoulos 03]
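A minimal sketch of the spectral-analysis idea (bin size, cutoff, and function names are illustrative assumptions, not the exact method of Hussain et al.): bin packet arrival timestamps into fixed intervals and inspect the power spectrum of the resulting rate signal; an aggregate of many independent senders tends to concentrate power at lower frequencies than a single fast sender.

```python
import numpy as np

def power_spectrum(arrival_times, bin_ms=1.0):
    """Bin packet arrivals and return (freqs_hz, power) of the rate signal."""
    bin_s = bin_ms / 1000.0
    duration = max(arrival_times) - min(arrival_times)
    n_bins = int(duration / bin_s) + 1
    counts, _ = np.histogram(arrival_times, bins=n_bins)
    counts = counts - counts.mean()            # remove the DC component
    power = np.abs(np.fft.rfft(counts)) ** 2
    freqs = np.fft.rfftfreq(len(counts), d=bin_s)
    return freqs, power

def low_frequency_fraction(freqs, power, cutoff_hz=100.0):
    """Share of spectral power below the cutoff; a higher value suggests
    an aggregate of many sources rather than one fast sender."""
    return power[freqs < cutoff_hz].sum() / power.sum()
```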
Overview of DoS/DDoS
• DDoS
  • entities: attacker, [masters], agents, target
  • stages:
    • recruit: scan for potentially vulnerable hosts
    • exploit: compromise a vulnerable host using some exploitable vulnerability
    • infect: propagate the attack code to the new agent
    • attack: use the attack code to inflict denial of service
DDoS: a taxonomy
• Provides a map of the field
  • can help structure research efforts
  • provides a common vocabulary
  • may identify unexplored research areas
  • can speak to the completeness (or otherwise) of a proposed defense
• Helps us "think like" someone designing a DoS attack
  • insight into attack design may lead to insight into defense
  • a more complete understanding may enable us to anticipate features of new attacks
[Mirkovic, Reiher 04]
Categories
• Degree of Automation
• Agent Recruitment Strategies
• Exploited Weakness (to deny service)
• Source Address Validity
• Attack Rate Dynamics
• Possibility of Characterization
• Persistence of Agent Set
• Victim Type
• Impact on the Victim
Degree of Automation
• Manual
  • attacker manually scans, breaks in, installs attack code, then directs the attack
  • used by early DDoS attacks only
• Fully Automated
  • exploit/recruitment phase and attack phase both automated
  • everything is preprogrammed in advance; no need for further communication between master & agent
  • minimal exposure for the attacker
  • inflexible: the attack specification is hard-coded
  • hybrid of fully/semi-automated: fully programmed in advance, but leaves a backdoor for future modification
Degree of Automation
• Semi-Automated
  • recruitment phase automated; attack phase manually initiated
  • requires communication between master & agents to initiate the attack:
    • direct communication
      • network packets exchanged between master & agent
      • each needs to know the other's IP address
      • adds to the risk of discovery
      • if an agent is actively listening, a network scanner may find it
      • may listen only at prearranged times
    • indirect communication
      • uses some pre-existing legitimate communication channel
      • IRC is commonly used
      • discovery of an agent may reveal only the IRC server & channel
      • channel hopping is used to further disguise the control traffic
Agent Recruitment - scanning strategy
• Random Scanning (Code Red)
  • high volume of inter-network traffic may aid detection
  • no coordination, which increases the likelihood of duplicate scans
• Hit List
  • each newly recruited machine is given a piece of the list
  • can be very fast and efficient: no collisions
  • a large list causes more traffic, possibly aiding detection
• Permutation Scanning (see sketch below)
  • if an agent sees an already-infected host, it chooses a new random starting point
  • if an agent sees a threshold number of infected hosts, it becomes dormant
• Signpost Scanning
  • uses communication patterns or data found on newly infected hosts to select the next targets
  • example: any email worm that spreads using the address book of the infected host
  • hard to detect based on traffic patterns
  • may be slow to spread
• Local Subnet (Code Red II, Nimda)
  • preferentially scans addresses on the agent's own subnet
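A minimal sketch of the permutation-scanning coordination logic only (the affine map, constants, and restart rule are illustrative assumptions, not taken from any real worm; there is deliberately no networking here): every agent walks the same keyed pseudo-random permutation of the address space, so hitting an already-covered address is a signal rather than wasted work.

```python
# Abstract coordination math only -- no scanning or network code.
A, B, M = 0x9E3779B1, 0x7F4A7C15, 2**32   # A odd => affine map is a bijection mod 2**32

def next_address(x: int) -> int:
    """Step to the next address in the shared pseudo-random permutation."""
    return (A * x + B) % M

def walk(start: int, already_infected: set, max_steps: int,
         dormancy_threshold: int = 3) -> list:
    """Simulate one agent's walk: restart on hitting an infected host,
    go dormant after enough overlap. Returns addresses it would probe."""
    probed, hits, x = [], 0, start
    for step in range(max_steps):
        x = next_address(x)
        if x in already_infected:
            hits += 1
            if hits >= dormancy_threshold:
                break                        # region already covered: go dormant
            x = hash((x, step)) % M          # stand-in for a fresh random restart
        else:
            probed.append(x)
    return probed
```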
Agent Recruitment - vulnerability scanning
• Horizontal
  • looks for a specific port/vulnerability across many hosts
• Vertical
  • looks for multiple ports/vulnerabilities on the same host
• Coordinated
  • scans multiple machines on the same subnet for a specific vulnerability
• Stealthy
  • any of the above, done slowly to avoid detection
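A toy sketch of how a monitor might label the horizontal vs. vertical patterns from observed probe records (the fan-out threshold and function names are arbitrary assumptions):

```python
from collections import defaultdict

def classify_scans(probes, fanout=10):
    """probes: iterable of (src_ip, dst_ip, dst_port) tuples.
    Labels each source 'horizontal' (one port, many hosts) or
    'vertical' (many ports, one host); thresholds are arbitrary."""
    by_src = defaultdict(lambda: (set(), set()))   # src -> (hosts, ports)
    for src, dst, port in probes:
        hosts, ports = by_src[src]
        hosts.add(dst)
        ports.add(port)
    labels = {}
    for src, (hosts, ports) in by_src.items():
        if len(hosts) >= fanout and len(ports) == 1:
            labels[src] = "horizontal"
        elif len(ports) >= fanout and len(hosts) == 1:
            labels[src] = "vertical"
        else:
            labels[src] = "mixed/unknown"
    return labels
```

A stealthy scan defeats exactly this kind of counting unless the counts are accumulated over a much longer window.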
Agent Recruitment - attack code propagation
• Central Server (li0n worm)
  • all newly recruited agents contact a central server to get the attack code
  • single point of failure: can be discovered and shut down
  • high load at the central server may limit efficiency or enable detection
• Back-chaining (Ramen, Morris worms)
  • attack code is downloaded from the machine that was used to exploit the new host
• Autonomous (Code Red, Warhol, various email worms)
  • attack code is downloaded concurrently with the exploit
Exploited Weakness
• Semantic (TCP SYN, NAPTHA)
  • exploits a specific feature or bug of a protocol or application on the victim in order to consume excessive amounts of its resources
  • can potentially be mitigated by deploying modified protocols/applications (e.g., SYN cookies; see the sketch after the next slide)
• Brute Force
  • the intermediate network has more resources than the victim and can deliver a higher volume of packets than the victim can handle
  • overwhelms victim resources using seemingly legitimate packets
  • hard to filter without also harming legitimate traffic
  • requires a higher volume of attack packets
  • modifying protocols to counter semantic attacks raises the bar somewhat for the attacker
Exploited Weakness
• Is it semantic or brute force?
  • some attacks have the capacity to act like either one
  • a semantic attack like TCP SYN flooding may be countered by protocol modification, but if the attack is large enough it can still overwhelm through brute force
  • some attacks are a combination of both (SMURF): a protocol feature is exploited at a server (not the victim), which then overwhelms the intended target through brute force
• Packet Features
  • the exploited resource may determine some characteristics of the packets
  • if the packets must contain some valid header & payload content, they may be easier to detect & filter
  • some attacks (more often semantic) must have some valid packet features, since they are aimed at one particular weakness
  • however, if the aim is just to consume network resources, packet features can be varied at will: harder to detect & filter
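As one concrete example of countering a semantic attack by modifying the implementation, a minimal SYN-cookie sketch (the field layout, timestamp granularity, and use of HMAC are illustrative simplifications, not the exact scheme any OS uses): the server encodes the connection identity into its initial sequence number instead of allocating state on each SYN, so a SYN flood no longer exhausts the connection table.

```python
import hmac, hashlib, time

SECRET = b"server-local-secret"   # illustrative; would be rotated in practice

def syn_cookie(src, dst, sport, dport, t=None):
    """Encode connection identity + a coarse timestamp into a 32-bit ISN,
    so no per-connection state is kept until the final ACK arrives."""
    t = int(time.time()) >> 6 if t is None else t       # ~64-second granularity
    msg = f"{src}|{dst}|{sport}|{dport}|{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return (t & 0xFF) << 24 | int.from_bytes(mac[:3], "big")

def check_cookie(cookie, src, dst, sport, dport, max_age=2):
    """Validate the ISN echoed in the final ACK without stored state."""
    now = int(time.time()) >> 6
    return any(syn_cookie(src, dst, sport, dport, now - age) == cookie
               for age in range(max_age + 1))
```

This is exactly the "raises the bar" point above: the semantic resource exhaustion is gone, but a large enough flood can still win by brute force on bandwidth.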
Source Address Validity
• Spoofed Address
  • avoids accountability, helps avoid detection
  • required for reflector attacks
  • makes brute force attacks harder to defend against: with valid addresses, the victim could manage by intelligently allocating resources among the various flows
• Valid Address
  • some attacks (NAPTHA) require a valid source address, since the attack mechanism requires several request/reply exchanges between agent & victim
  • older Windows (NT) didn't allow user-level processes to modify packet headers
Source Address Validity
• Types of Spoofed Addresses
  • Routable vs. Non-Routable
  • Fixed
    • reflector attacks, or an attack trying to place blame on a third party
  • Random
    • filtering techniques can be useful (see ingress-filter sketch below)
  • Subnet
    • choose an address randomly from the same subnet as the agent
    • defeats ingress filtering
    • the subnet where the agent is located may still be able to detect & filter it
  • En Route
    • choose the address of some host on the route from agent to victim
    • not used by any known attack, but foreseeable, since it counters some existing filtering techniques
[RFC 2827; Park, Lee 01]
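A minimal sketch of RFC 2827-style ingress filtering at a stub network's edge (the prefix is a documentation address and the policy is made up for illustration): packets leaving the customer network are forwarded only if their source address falls inside the customer's own prefix, which blocks random spoofing but, as noted above, not subnet spoofing.

```python
import ipaddress

# Hypothetical customer prefix delegated behind this edge interface.
CUSTOMER_PREFIX = ipaddress.ip_network("192.0.2.0/24")

def ingress_permit(src_ip: str) -> bool:
    """RFC 2827-style check: forward only packets whose source address
    belongs to the prefix assigned to this interface."""
    return ipaddress.ip_address(src_ip) in CUSTOMER_PREFIX

assert ingress_permit("192.0.2.77")        # legitimate (or subnet-spoofed): passes
assert not ingress_permit("203.0.113.5")   # randomly spoofed source: dropped
```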
Reflector Attacks
• The attacker sends packets to some (non-hostile) intermediate entity
  • the spoofed source address of the packets is the victim's IP address
  • the responses from the intermediate entities overwhelm the victim
• SMURF (1998)
  • ICMP echo requests sent to various IP broadcast addresses
  • amplifier effect: many responses from a single packet
  • the Feb. 2000 attack against Yahoo was based on SMURF
• DNS Reflector Flood (2000)
  • agents generate a large number of DNS requests with the spoofed source address of the victim
  • amplifier effect: DNS responses can be significantly larger than the DNS request
[CERT: Advisory CA-1998-01, Incident Note IN-2000-04]
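A back-of-the-envelope sketch of the amplifier arithmetic (the packet sizes below are rough illustrative figures, not measurements): the bandwidth amplification factor is the ratio of reflected bytes to spoofed-request bytes, times the number of responders a single packet triggers.

```python
def amplification_factor(request_bytes: float, response_bytes: float,
                         responders: int = 1) -> float:
    """Bytes arriving at the victim per byte the attacker sends."""
    return responders * response_bytes / request_bytes

# Illustrative figures only:
# SMURF: one ~64 B echo request to a broadcast address, ~250 hosts reply ~64 B each.
print(amplification_factor(64, 64, responders=250))   # ~250x
# DNS reflection: a ~60 B query eliciting a ~512 B response from one server.
print(amplification_factor(60, 512))                  # ~8.5x
```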
Attack Rate Dynamics
• Constant Rate (most attacks)
  • agents send packets as fast as they can once the attack starts
  • the large traffic stream may aid detection
• Variable Rate
  • used in an attempt to avoid or delay detection
  • Increasing Rate
    • start slow and gradually increase, perhaps over a long period of time
    • harder to distinguish from a legitimate increase in traffic
  • Fluctuating Rate
    • could respond to victim behavior or to preprogrammed timing
    • could be used to pulse the attack intensity
    • agents could coordinate pulsing so the overall attack intensity is steady, but the set of agents attacking at any one time varies
    • makes it harder to detect & mitigate at the agent's source network
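A rough sketch of how a monitor might label these dynamics from a per-second packet-count series (the window split, 1.5x growth factor, and variance threshold are arbitrary assumptions): a steadily rising mean suggests an increasing-rate attack, while high variance relative to the mean suggests pulsing.

```python
import statistics

def rate_dynamics(counts, cv_threshold=0.5):
    """counts: packets per second from one suspected source network.
    Returns a crude label for the stream's rate dynamics."""
    mean = statistics.fmean(counts)
    cv = statistics.pstdev(counts) / mean if mean else 0.0   # coefficient of variation
    half = len(counts) // 2
    rising = statistics.fmean(counts[half:]) > 1.5 * statistics.fmean(counts[:half])
    if rising:
        return "increasing"
    if cv > cv_threshold:
        return "fluctuating/pulsing"
    return "constant"

print(rate_dynamics([100, 5, 110, 3, 95, 4, 105, 6]))   # -> fluctuating/pulsing
```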
Possibility of Characterization
• Characterizable
  • Filterable vs. Non-Filterable
  • Filterable:
    • packets may be malformed, or the protocol or application may not be needed by the target
    • examples: a UDP flood against a web server, an HTTP flood against an SMTP server
    • traffic can be filtered by a firewall (see sketch below)
  • Non-Filterable:
    • well-formed packets that request legitimate/critical services
    • no way to distinguish attack packets from legitimate service requests
    • example: an HTTP flood against a web server
• Non-Characterizable
  • attack packets use a variety of protocols/applications, possibly randomly generated
  • some attacks are characterizable in theory, but not in practice
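A toy sketch of the filterable case (the port numbers are ordinary service conventions; the policy itself is made up for illustration): a stateless predicate for a host that only serves HTTP/HTTPS drops everything else, which stops a UDP flood cold but is powerless against a well-formed HTTP flood to the same host.

```python
# Toy policy for a dedicated web server: only TCP to ports 80/443 is needed.
ALLOWED = {("tcp", 80), ("tcp", 443)}

def permit(proto: str, dst_port: int) -> bool:
    """Stateless filter: drop any traffic the victim's role doesn't require."""
    return (proto, dst_port) in ALLOWED

assert not permit("udp", 53)   # UDP flood against a web server: filterable
assert permit("tcp", 80)       # HTTP flood: indistinguishable at this layer
```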
Persistence of Agent Sets
• Constant: all agents behave the same
  • engage in the attack simultaneously
  • may 'pulse' the attack, but the 'on'/'off' periods match
• Variable: agents don't act in unison
  • may be divided into groups, with not all groups active at the same time
  • different groups may take turns pulsing the victim
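One rough way to tell these apart from traffic logs (the windowing and the choice of Jaccard similarity are arbitrary assumptions): compare the set of active source addresses across consecutive time windows; a constant agent set yields high overlap, a variable one low overlap.

```python
def jaccard(a: set, b: set) -> float:
    """Set overlap in [0, 1]; 1.0 means identical active-agent sets."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def agent_set_persistence(windows):
    """windows: list of sets of source IPs active in consecutive intervals.
    Mean pairwise overlap; near 1 suggests a constant agent set."""
    overlaps = [jaccard(windows[i], windows[i + 1])
                for i in range(len(windows) - 1)]
    return sum(overlaps) / len(overlaps)

constant = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "c"}]
variable = [{"a", "b"}, {"c", "d"}, {"a", "b"}]
print(agent_set_persistence(constant))   # 1.0
print(agent_set_persistence(variable))   # 0.0
```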
Victim Type
• Specific Application
  • example: send bogus signature packets to an authentication service
  • other services on the host may be unaffected
  • detection is difficult: attack volume is usually small, and the host operates normally except for the targeted application
  • we may be able to distinguish legitimate from attack packets at the application level (or maybe not)
  • even if we can, a defense strategy would need to take into account each application we want to protect
• Host
  • aims to disable all legitimate access to the target host
  • overloads or disables the network communication subsystem, or otherwise causes the host to crash, freeze, or reboot
  • hosts can try to limit their exposure by patching known holes and updating protocols with DDoS-resistant versions
  • however, by themselves they cannot defend against attacks that consume all of their network resources
  • they need upstream help, i.e., a firewall that can recognize and help filter the attack
Victim Type
• Resource
  • any resource critical to the victim (server, router, bottleneck link)
• Network
  • aims to consume all available incoming bandwidth for the target network
  • the packet destination can be any host on the target network: packet volume, not content, is key
  • can be easy to detect due to high traffic volume
  • the target network is dependent on the upstream network for help in defending: even if it could detect & filter the attack traffic, the entire resources of its ingress routers may be consumed doing so
• Infrastructure
  • coordinated targeting of distributed services crucial to the global internet (root DNS servers, core routers, etc.)
  • from the point of view of a single target, may look the same as a host-type attack
  • the difference in category is due to the simultaneous targeting of multiple instances of some critical service
  • a coordinated defense may be necessary to counter it
Impact on Victim
• Disruptive: completely deny access
  • Self-Recoverable
    • after the influx of attack packets ends, life returns to normal without human intervention
    • a prompt defense (i.e., recognition & filtering) can potentially make these transparent to legitimate clients
  • Human-Recoverable
    • after the influx of attack packets ends, rebooting or reconfiguration is required
  • Non-Recoverable
    • inflicts permanent damage to hardware
    • conceivable, but none are known
Impact on Victim
• Degrading (subtle or overt)
  • consumes only a portion of the victim's resources, degrading service to legitimate clients
  • very hard to detect; may go undetected for a long period of time
  • could be very costly:
    • lost customers due to poor service
    • money spent on unnecessary equipment upgrades
  • most existing strategies for dealing with DDoS have a hard time with this one