This paper discusses the architecture, implementation, and results of the Potemkin Virtual Honeyfarm, a large-scale high-interaction honeypot system. It presents a novel approach to containment and demonstrates improved efficiency in simulation.
Scalability, Fidelity and Containment in the Potemkin Virtual Honeyfarm • Authors: • Michael Vrable, Justin Ma, Jay Chen, David Moore, Erik Vandekieft, Alex C. Snoeren, Geoffrey M. Voelker and Stefan Savage • University of California, San Diego Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), Brighton, UK, October 2005 Presented By: Dan DeBlasio for CAP 6133, Spring 2008
Outline • Architectural Overview • Implementation • Results • Commentary/Conclusion
Overview • when a packet comes in, it is routed to an existing VM if one owns the destination address; otherwise a new VM is created for that address • each new VM is a copy of a template system that carries out the interaction • the system only keeps track of each VM's differences from the template • infection data is contained so a compromised VM cannot infect others
Honeyfarm Architecture (diagram): a packet comes in; if its destination IP already has a VM, the packet is forwarded to that VM, otherwise a new VM is created; an outbound packet is released to the Internet only if judged safe.
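The inbound-dispatch step can be sketched as follows. This is an illustrative Python sketch, not the paper's code; the names `dispatch`, `clone_from_template`, and `active_vms` are invented for this example.

```python
# Sketch of inbound dispatch: route a packet to the VM that owns its
# destination IP, cloning a fresh VM from the template if none exists.

active_vms = {}      # destination IP -> VM handle
next_vm_id = 0

def clone_from_template(ip):
    """Stand-in for flash-cloning the template VM and binding it to ip."""
    global next_vm_id
    next_vm_id += 1
    return {"id": next_vm_id, "ip": ip}

def dispatch(packet):
    ip = packet["dst"]
    if ip not in active_vms:
        active_vms[ip] = clone_from_template(ip)   # new address: new VM
    return active_vms[ip]                          # existing address: reuse VM

vm_a = dispatch({"dst": "10.0.0.1", "payload": b"scan"})
vm_b = dispatch({"dst": "10.0.0.1", "payload": b"exploit"})
assert vm_a is vm_b   # packets to the same address share one VM
```

The key property is that a VM exists only for addresses that have actually received traffic, which is what makes the farm scale.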
Containment • prior honeyfarms were low-interaction only • the challenge is keeping the honeyfarm from becoming a worm incubator • relies on a gateway router to "scrub" the outgoing traffic • emulates destination addresses on the internal network when needed
Gateway Router • incoming packets to an inactive IP are sent to a non-overloaded physical server so the address can be emulated • the server choice is random or load-based • packets directed to an active IP pass to the machine where that VM was created • filters out "known" attacks so the same worm is not emulated over and over
Gateway Router • must prevent a worm outbreak from starving the honeyfarm of resources through reflected traffic • decides when a VM should be reclaimed because it has gone inactive without being successfully compromised • also decides when a compromised machine should be reclaimed to reallocate resources
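The gateway's outbound containment policy can be sketched as below. This is a minimal illustration, assuming a trivial payload check; the function names and the `10.1.` internal prefix are invented, and a real gateway would use signatures and heuristics rather than a substring match.

```python
# Sketch of outbound containment: traffic judged unsafe is reflected
# back into the honeyfarm's internal address space instead of being
# released, so an infected VM can only "infect" other emulated hosts.

def contains_known_exploit(payload):
    # placeholder classifier, purely for illustration
    return b"exploit" in payload

def handle_outbound(packet, internal_prefix="10.1."):
    if contains_known_exploit(packet["payload"]):
        # reflect: rewrite the destination into the internal network
        host_part = packet["dst"].split(".", 2)[2]
        packet["dst"] = internal_prefix + host_part
        return ("reflected", packet)
    return ("forwarded", packet)

action, pkt = handle_outbound({"dst": "1.2.3.4", "payload": b"exploit code"})
assert action == "reflected" and pkt["dst"] == "10.1.3.4"
```

Reflection is what lets the farm observe second-generation infections without ever releasing attack traffic to the real Internet.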
Virtual Machine Monitor • at startup the system boots the guest OS and lets it warm up and start its server services • takes a snapshot of the system (like hibernation) • uses this snapshot to create new VMs on the fly • leaves the reference VM running so its memory stays up to date
VMM Flash Cloning (sequence diagram): a new packet for address A arrives at the domain network stack; the clone manager queues packets while the Xen management daemon clones a VM and changes its address to IP A; once the clone is ready, the queued packets are flushed and forwarded to the cloned VM, whose responses pass back through the clone manager's queue.
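The queue-until-ready behavior in the flash-cloning sequence can be sketched as below. The `CloneManager` class and its method names are invented for illustration; the real clone manager coordinates with Xen rather than a boolean flag.

```python
# Sketch of flash-clone packet buffering: packets for a new address are
# queued while the VM is being cloned, then flushed once it is ready.

from collections import deque

class CloneManager:
    def __init__(self):
        self.queue = deque()
        self.vm_ready = False

    def packet_in(self, pkt):
        if not self.vm_ready:
            self.queue.append(pkt)   # buffer while the clone boots
            return []                # nothing delivered yet
        return [pkt]                 # clone is up: deliver immediately

    def clone_complete(self):
        # clone finished: flush everything buffered during cloning
        self.vm_ready = True
        flushed = list(self.queue)
        self.queue.clear()
        return flushed
```

Buffering instead of dropping means the attacker's first packets are preserved, so even the connection that triggered cloning is fully observed.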
Delta Virtualization • at copy time, each new VM maps all of its memory to the reference VM • on a write, a private copy of the page is stored in the VM's own memory (copy-on-write) • memory sharing further reduces the amount of memory needed
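The copy-on-write idea can be sketched as below. The `DeltaVM` class is an invented illustration of the concept, not the paper's implementation; real delta virtualization works at the page-table level inside the VMM.

```python
# Copy-on-write sketch of delta virtualization: a cloned VM shares all
# pages with the reference image and stores only the pages it writes,
# so memory cost grows with the delta, not the full image size.

class DeltaVM:
    def __init__(self, reference):
        self.reference = reference   # shared, read-only template pages
        self.delta = {}              # private copies of written pages

    def read(self, page):
        # prefer the private copy; fall back to the shared template
        return self.delta.get(page, self.reference[page])

    def write(self, page, value):
        self.delta[page] = value     # copy-on-write: private page only

template = {0: "kernel", 1: "zeros"}
vm = DeltaVM(template)
vm.write(1, "attacker data")
assert vm.read(0) == "kernel"          # still shared with the template
assert vm.read(1) == "attacker data"   # private copy
assert template[1] == "zeros"          # reference image untouched
```

Because most cloned VMs touch only a small fraction of their pages, many VMs can share one physical machine.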
Results • evaluated on a /16 network (Class B), ~2^16 = 65,536 addresses
Contributions • shows that a large-scale, high-interaction honeyfarm is feasible • gives evidence (in simulation) that the approach improves the efficiency of a honeyfarm
Weaknesses • only tested in simulation • only used Linux-based server VMs • only tried at a /16 scale
Improvements • use Windows PCs as well as Linux servers • use a honeyd-style first responder so a full VM need not be cloned for mere scanning packets