Enabling Worm and Malware Investigation Using Virtualization (Demo and poster this afternoon) Dongyan Xu, Xuxian Jiang CERIAS and Department of Computer Science Purdue University
The Team • Lab FRIENDS • Xuxian Jiang (Ph.D. student) • Paul Ruth (Ph.D. student) • Dongyan Xu (faculty) • CERIAS • Eugene H. Spafford • External Collaboration • Microsoft Research
Our Goal In-depth understanding of increasingly sophisticated worm/malware behavior
Outline • Motivation • An integrated approach (key enabler: virtualization) • Front-end: Collapsar (Part I) • Back-end: vGround (Part II) • Bringing them together • On-going work
The Big Picture Diagram: traffic from production domains A and B is diverted (via GRE tunneling and proxy ARP) into worm capture; captured worms then feed worm analysis.
Part I Front-End: Collapsar Enabling Worm/Malware Capture * X. Jiang, D. Xu, “Collapsar: a VM-Based Architecture for Network Attack Detention Center”, 13th USENIX Security Symposium (Security’04), 2004.
General Approach • Promise of honeypots • Providing insights into intruders' motivations, tactics, and tools • Highly concentrated datasets w/ low noise • Low false-positive and false-negative rates • Discovering unknown vulnerabilities/exploits • Example: CERT advisory CA-2002-01 (Solaris CDE subprocess control daemon, dtspcd)
Current Honeypot Operation • Individual honeypots • Limited local view of attacks • Federation of distributed honeypots • Deploying honeypots in different networks • Exchanging logs and alerts • Problems • Difficulties in distributed management • Lack of honeypot expertise • Inconsistency in security and management policies • Example: log format, sharing policy, exchange frequency
Our Approach: Collapsar • Based on the HoneyFarm idea of Lance Spitzner • Achieving two (seemingly) conflicting goals • Distributed honeypot presence • Centralized honeypot operation • Key ideas (enabled by virtualization) • Leveraging unused IP addresses in each network • Transparently diverting the corresponding traffic to a "detention" center • Creating VM-based honeypots in the center
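The diversion step can be pictured with a small sketch: watch the local segment, pick out packets addressed to unused (dark) IPs, and tunnel them to the center. This is an illustration only, not the Collapsar redirector itself (which uses GRE tunneling and proxy ARP); the dark-IP set, the center endpoint, and the plain-UDP encapsulation are assumptions.

    import socket

    # Illustrative values, not from the talk.
    DARK_IPS = {"10.0.0.50", "10.0.0.51"}       # unused addresses we claim
    CENTER = ("collapsar.example.edu", 4789)    # hypothetical front-end endpoint

    # Raw packet socket: sees every frame on this segment (Linux only, needs root).
    sniff = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
    tunnel = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        frame, _ = sniff.recvfrom(65535)
        if len(frame) < 34 or frame[12:14] != b"\x08\x00":   # IPv4 frames only
            continue
        dst = socket.inet_ntoa(frame[30:34])    # IPv4 destination address
        if dst in DARK_IPS:
            # Divert to the Collapsar center; the real redirector
            # encapsulates in GRE rather than UDP.
            tunnel.sendto(frame, CENTER)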
Collapsar Architecture Diagram: attackers reach unused addresses in several production networks; a redirector in each network tunnels that traffic to the Collapsar center's front-end, which dispatches it to VM-based honeypots overseen by a management station and a correlation engine.
Comparison with Current Approaches • Overlay-based approach (e.g., NetBait, Domino overlay) • Honeypots deployed in different sites • Logs aggregated from distributed honeypots • Data mining performed on aggregated log information • Key difference: where the attacks take place (on-site vs. off-site)
Comparison with Current Approaches • Sinkhole networking approach (e.g., iSink) • "Dark" address space to monitor Internet abnormality and commotion (e.g., MSBlaster worms) • Limited interaction for better scalability • Key difference: contiguous large address blocks (vs. scattered addresses)
Comparison with Current Approaches • Low-interaction approach (e.g., honeyd, iSink) • Highly scalable deployment • Low security risk • Key difference: emulated services (vs. the real thing) • Less effective at revealing unknown vulnerabilities • Less effective at capturing 0-day worms
Collapsar Design • Functional components • Redirector • Collapsar Front-End • Virtual honeypots • Assurance modules • Logging module • Tarpitting module • Correlation module
Collapsar Deployment • Deployed in a local environment for a two-month period in 2003 • Traffic redirected from five networks • Three wired LANs • One wireless LAN • One DSL network • ~50 honeypots analyzed so far • Internet worms (MSBlaster, Enbiei, Nachi) • Interactive intrusions (Apache, Samba) • OS: Windows, Linux, Solaris, FreeBSD
Incident: Apache Honeypot/VMware • Vulnerabilities • Vul 1: Apache (CERT® CA-2002-17) • Vul 2: ptrace (CERT® VU#628849) • Time-line • Deployed: 23:44:03, 11/24/03 • Compromised: 09:33:55, 11/25/03 • Attack monitoring • Detailed log • http://www.cs.purdue.edu/homes/jiangx/collapsar
Incident: Windows XP Honeypot/VMware • Vulnerability • RPC DCOM vul. (Microsoft Security Bulletin MS03-026) • Time-line • Deployed: 22:10:00, 11/26/03 • MSBlaster: 00:36:47, 11/27/03 • Enbiei: 01:48:57, 11/27/03 • Nachi: 07:03:55, 11/27/03
Summary (Front-End) • A novel front-end for worm/malware capture • Distributed presence and centralized operation of honeypots • Good potential for attack correlation and log mining • Unique features • Aggregation of scattered unused (dark) IP addresses • Off-site (relative to participating networks) attack occurrence and monitoring • Real services for revealing unknown vulnerabilities
Part II Back-End: vGround Enabling Worm/Malware Analysis *X. Jiang, D. Xu, H. J. Wang, E. H. Spafford, “Virtual Playgrounds for Worm Behavior Investigation”, 8th International Symposium on Recent Advances in Intrusion Detection (RAID’05), 2005.
Basic Approach • A dedicated testbed • Internet-in-a-box (IBM), Blended Threat Lab (Symantec) • DETER • Goal: understanding worm behavior • Static analysis / execution traces • Reverse engineering (IDA Pro, GDB, …) • Worm experiments at a limited scale • Result: only relatively static, small-scale analysis
The Reality – Worm Threats • Speed, Virulence, & Sophistication of Worms • Flash/Warhol Worms • Polymorphic/Metamorphic Appearances • Zombie Networks (DDoS Attacks, Spam) • What we also need • A high-fidelity, large-scale, live but safe worm playground
A Worm Playground (picture by Peter Szor, Symantec Corp.)
Requirements • Cost & Scalability • How about a topology with 2000+ nodes? • Confinement • In-house private use? • Management & user convenience • Diverse environment requirements • Recovery from damage caused by a worm experiment • Re-installation, re-configuration, and reboot …
Our Approach • vGround • A virtualization-based approach • Virtual entities: • Leveraging current virtual machine techniques • Designing new virtual networking techniques • User configurability • Customizing every node (end hosts/routers) • Enabling flexible experimental topologies
An Example Run: Internet Worms Diagram: a virtual worm playground mapped onto a shared physical infrastructure (e.g., PlanetLab).
Key Virtualization Techniques • Full-System Virtualization • Network Virtualization
Full-System Virtualization • Emerging and New VM Techniques • VMware, Xen, Denali, UML • Support for real-world services • DNS, Sendmail, Apache w/ "native" vulnerabilities • Adopted technique: UML • Deployability • Convenience/Resource Efficiency
User-Mode Linux (http://user-mode-linux.sf.net) • System-Call Virtualization • User-Level Implementation Diagram: UML user processes are intercepted via ptrace by the guest OS kernel (with its own MMU handling and device drivers), which itself runs as a process on the host OS kernel and its device drivers above the hardware.
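UML's core trick is intercepting each guest process's system calls with ptrace and redirecting them into the guest kernel. A minimal Linux/x86-64 sketch of that interception mechanism follows; it is illustrative only, not UML's implementation, and /bin/ls as the traced program and the orig_rax offset are x86-64-specific assumptions.

    import ctypes, os

    PTRACE_TRACEME, PTRACE_PEEKUSER, PTRACE_SYSCALL = 0, 3, 24
    ORIG_RAX = 15 * 8   # byte offset of orig_rax in the x86-64 user area

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    libc.ptrace.restype = ctypes.c_long
    libc.ptrace.argtypes = [ctypes.c_long] * 4

    pid = os.fork()
    if pid == 0:
        libc.ptrace(PTRACE_TRACEME, 0, 0, 0)    # child: ask to be traced
        os.execv("/bin/ls", ["/bin/ls"])        # stand-in for a guest process
    else:
        while True:
            _, status = os.waitpid(pid, 0)
            if os.WIFEXITED(status):
                break
            # Fires at syscall entry and exit; UML would redirect the call
            # into the guest OS kernel instead of letting it through.
            print("syscall", libc.ptrace(PTRACE_PEEKUSER, pid, ORIG_RAX, 0))
            libc.ptrace(PTRACE_SYSCALL, pid, 0, 0)  # run to next syscall stop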
New Network Virtualization • Link-Layer Virtualization • User-Level Implementation Diagram: virtual nodes 1 and 2 attach through IP-IP tunnels to a user-level virtual switch running on the host OS.
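In spirit, a user-level virtual switch can be a simple datagram hub: each virtual NIC wraps its link-layer frames in UDP and sends them to the switch, which relays them to every other attached NIC. The following is a deliberately simplified flooding sketch, assuming UDP framing like the udp_sock switches in the topology example below; it is not vGround's actual switch implementation.

    import socket

    SWITCH_PORT = 1500   # matches the udp_sock port in the example topology

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", SWITCH_PORT))

    attached = set()     # (ip, port) endpoints of attached virtual NICs
    while True:
        frame, src = sock.recvfrom(65535)   # one datagram = one L2 frame
        attached.add(src)                   # remember the sending NIC
        for dst in attached:
            if dst != src:
                sock.sendto(frame, dst)     # flood to every other NIC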
User Configurability • Node Customization • System Template • End Node (BIND, Apache, Sendmail, …) • Router (RIP, OSPF, BGP, …) • Firewall (iptables) • Sniffer/IDS (Bro, Snort) • Topology Customization • Language • Network, Node • Toolkits
Networked Node Topology Example: three LANs with hosts AS1_H1/H2, AS2_H1/H2, AS3_H1/H2 connected through routers R1–R3, specified from system templates, switches, nodes, and routers:

    project Planetlab-Worm

    template slapper {
        image slapper.ext2
        cow enabled
        startup { /etc/rc.d/init.d/httpd start }
    }
    template router {
        image router.ext2
        routing ospf
        startup { /etc/rc.d/init.d/ospfd start }
    }

    switch AS1_lan1 {
        unix_sock sock/as1_lan1
        host planetlab6.millennium.berkeley.edu
    }
    switch AS1_AS2 {
        udp_sock 1500
        host planetlab6.millennium.berkeley.edu
    }
    switch AS2_lan1 {
        unix_sock sock/as2_lan1
        host planetlab1.cs.purdue.edu
    }
    switch AS2_AS3 {
        udp_sock 1500
        host planetlab1.cs.purdue.edu
    }
    switch AS3_lan1 {
        unix_sock sock/as3_lan1
        host planetlab8.lcs.mit.edu
    }

    node AS1_H1 {
        superclass slapper
        network eth0 {
            switch AS1_lan1
            address 128.10.1.1/24
            gateway 128.10.1.250
        }
    }
    node AS1_H2 {
        superclass slapper
        network eth0 {
            switch AS1_lan1
            address 128.10.1.2/24
            gateway 128.10.1.250
        }
    }
    node AS2_H1 {
        superclass slapper
        network eth0 {
            switch AS2_lan1
            address 128.11.1.5/24
            gateway 128.11.1.250
        }
    }
    node AS2_H2 {
        superclass slapper
        network eth0 {
            switch AS2_lan1
            address 128.11.1.6/24
            gateway 128.11.1.250
        }
    }
    node AS3_H1 {
        superclass slapper
        network eth0 {
            switch AS3_lan1
            address 128.12.1.5/24
            gateway 128.12.1.250
        }
    }
    node AS3_H2 {
        superclass slapper
        network eth0 {
            switch AS3_lan1
            address 128.12.1.6/24
            gateway 128.12.1.250
        }
    }

    router R1 {
        superclass router
        network eth0 {
            switch AS1_lan1
            address 128.10.1.250/24
        }
        network eth1 {
            switch AS1_AS2
            address 128.8.1.1/24
        }
    }
    router R2 {
        superclass router
        network eth0 {
            switch AS2_lan1
            address 128.11.1.250/24
        }
        network eth1 {
            switch AS1_AS2
            address 128.8.1.2/24
        }
        network eth2 {
            switch AS2_AS3
            address 128.9.1.2/24
        }
    }
    router R3 {
        superclass router
        network eth0 {
            switch AS3_lan1
            address 128.12.1.250/24
        }
        network eth1 {
            switch AS2_AS3
            address 128.9.1.1/24
        }
    }
Features • Scalability • 3,000 virtual hosts on 10 physical nodes • Iterative Experiment Convenience • Virtual node generation time: 60 seconds • Boot-strap time: 90 seconds • Tear-down time: 10 seconds • Strict Confinement • High Fidelity
Evaluation • Current Focus • Worm behavior reproduction • Experiments • Probing, exploitation, payloads, and propagation • Further Potentials – on-going work • Routing worms / Stealthy worms • Infrastructure security (BGP)
Experiment Setup • Two Real-World Worms • Lion, Slapper, and their variants • A vGround Topology • 10 virtual networks • 1,500 virtual nodes • 10 physical machines in an ITaP cluster
Evaluation • Target Host Distribution • Detailed Exploitation Steps • Malicious Payloads • Propagation Pattern
Probing: Target Network Selection • Lion worms: first octet in 13–243 • Slapper worms: 80, 81 • IANA IPv4 address space assignments: http://www.iana.org/assignments/ipv4-address-space
Exploitation (Lion) Captured trace: 1) probing, 2) exploitation, 3) propagation
Exploitation (Slapper) Captured trace: 1) probing, 2) exploitation, 3) propagation
Propagation Pattern and Strategy • Address-Sweeping • Randomly choose a Class B address (a.b.0.0) • Sequentially scan hosts a.b.0.0 – a.b.255.255 • Island-Hopping • Local subnet preference
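The two strategies are easy to state as target generators; here is a small sketch (the 0.75 local-preference probability in island_hopping is an illustrative assumption, since the talk only specifies "local subnet preference"):

    import itertools, random

    def address_sweeping():
        # Address sweeping: pick a random class B (a.b.0.0), then
        # sequentially scan hosts from a.b.0.0 to a.b.255.255.
        a, b = random.randint(1, 223), random.randint(0, 255)
        for c in range(256):
            for d in range(256):
                yield f"{a}.{b}.{c}.{d}"

    def island_hopping(local_prefix="192.168.1", p_local=0.75):
        # Island hopping: prefer targets on the local subnet with
        # probability p_local, else pick a random address.
        while True:
            if random.random() < p_local:
                yield f"{local_prefix}.{random.randint(0, 255)}"
            else:
                yield ".".join(str(random.randint(0, 255)) for _ in range(4))

    # First few probe targets under each strategy:
    print(list(itertools.islice(address_sweeping(), 3)))
    print(list(itertools.islice(island_hopping(), 3)))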
Propagation Pattern and Strategy • Address-Sweeping (Slapper worm): vGround snapshots of the 192.168.a.b address space at 2%, 5%, and 10% of hosts infected
Propagation Pattern and Strategy • Island-Hopping: vGround snapshots at 2%, 5%, and 10% of hosts infected
Summary (Back-End) • vGround – the back-end • A Virtualization-Based Worm Playground • Properties: • High Fidelity • Strict Confinement • Good Scalability • 3,000 Virtual Hosts on 10 Physical Nodes • High Resource Efficiency • Flexible and Efficient Worm Experiment Control
Combining Collapsar and vGround Diagram: traffic from domains A and B is diverted via GRE into worm capture (Collapsar); captured worms are then replayed for worm analysis (vGround).
Conclusions • An integrated virtualization-based platform for worm and malware investigation • Front-end: Collapsar • Back-end: vGround • Great potential for automating • Characterization of unknown service vulnerabilities • Generation of 0-day worm signatures • Tracking of worm contamination
On-going Work • More real-world evaluation • Stealthy worms • Polymorphic worms • Additional capabilities • Collapsar center federation • On-demand honeypot customization • Worm/malware contamination tracking • Automated signature generation
Thank you. Stop by our poster and demo this afternoon! For more information: Email: dxu@cs.purdue.edu URL: http://www.cs.purdue.edu/~dxu Google: "Purdue Collapsar Friends"