A comprehensive guide on verifying, securing, and analyzing incidents in UNIX environments with expert advice on evidence collection and system isolation. Learn tools, strategies, and methods for effective forensic procedures.
UNIX Forensics Techniques for Incident Response
There is nothing more stimulating than a case where everything goes against you. - Sherlock Holmes
When Faced With a “Situation”… • Verify Incident • Secure scene • Collect evidence • Find clues • Analyze the Unknown
Verification: What do you know? • How can we be certain that an incident even occurred? • Verify the Incident! • Where to find information? • Intrusion Logs • Firewall Logs • Interviews • Emails, Network Admin, Users, ISP, etc…
Verification: What do we know? • Three situations • 1. Verification without touching the system • 2. Verification by touching the system minimally. You have a clue or two where to look. • 3. Verification by full analysis of live system to find any evidence that an incident has occurred.
Secure Incident Scene • What exactly does this mean? • Limit the amount of activity on the system to as little as possible • Limit damage by isolating the system • Have ONE person perform all actions • Limit changes to the crime environment • Record your actions
CATCH-22 • Catch-22: Anything and everything you do will change the state of the system • POWER OFF? Changes it. • Leave it plugged in? Changes it. • Obtaining a backup will change the system • Unplug the network? Changes it. • Even Doing Nothing will ALSO change the state of the system.
Unplug power from system • This method may be the most damaging to effective analysis, though there are some benefits as well • Benefits include that you can now move the system to a more secure location and physically remove the hard drive from the system • Cons… you lose all evidence of running processes and memory contents
Unplug from Network? • Unplug from the network? • Unplug it from the network and plug the distant end into a small hub that is not connected to anything else. • Most systems will write error messages into log files if not on a network. • If you make the computer think it is still on a network, you will succeed in limiting the amount of changes to that system.
Incident Scene Snapshot • Record state of computer • Photos, State of computer, What is on the screen? • What is obviously running on the screen? • Xterm? • X-windows? • Should you port scan the affected computer? • Pros: You can see all active and listening ports • Cons: It affects the computer and some backdoors log how many connections come into them and could tip off the bad guy
First Look • I have no data yet. It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories instead of theories to suit facts. – Sherlock Holmes
Think Hacker… • Some of the following steps are the same steps hackers take to avoid detection by you. • Now we are taking the ball and trying to avoid detection by the hacker • Many of the following methods are the same ones hackers use to stay hidden from you; you can use them to stay hidden from him
Think Hacker • Some incidents are just the tip of the iceberg. • A good investigator might contemplate watching the hacker enter back into the system to see what he is doing and what he is after. • There is always a purpose • DDOS • Software Piracy • Challenge • Corporate Espionage • Revenge or Political Motives • Try to keep in mind the WHY of the situation.
Equipment Needed • Incident Response Computer(s) • Luggable PC • 2 MAIN IDE Drives (One Windows, One Linux) • Ability to switch between OSes • CD-RW • Tape drive • SCSI and IDE removable drives • SCSI External Drives (LARGE CAPACITY) • Laptop • External SCSI Drive
Equipment Needed • Incident Response Floppy or CDROM • Use local shared libraries on floppy or cdrom • If rushed, just use good binaries from known clean system • What tools? • At a minimum have netstat, dd, find, nc (netcat), ls, ps, lsof, strings, last, ifconfig, and uptime.
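A minimal sketch of assembling such a toolkit on a known-clean system; the directory name, the specific binary paths, and the mkisofs step are assumptions and will vary by distribution:
# mkdir /tmp/ir-toolkit
# cp /bin/ls /bin/ps /bin/netstat /sbin/ifconfig /usr/bin/find /usr/bin/strings /usr/sbin/lsof /tmp/ir-toolkit/
# file /tmp/ir-toolkit/* (shows which copies are dynamically linked)
# ldd /tmp/ir-toolkit/lsof (lists the shared libraries that must also go on the media)
# mkisofs -o ir-toolkit.iso /tmp/ir-toolkit (burn-ready image for the response CD-ROM)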
Chicken or Egg • Should you backup the system first? • Should you find the extent of the damage? • Set up in policy for your incident response: • It depends on the system and what you need it for. • To get BEST evidence BACKUP first at the cost of time to get answers • To get FAST answers ANALYZE first at the cost of getting best evidence • Label systems with priority. Some will need answers quicker than your ability to get best evidence.
Backups • There are no rules in backups • Safeback • Encase • dd • dump • cpio • tar
Logical or Physical Backups • Physical Drive Imaging • Grabs entire drive including swap space • BEST evidence • Logical Imaging • Most useful in obtaining backups of SCSI Raid devices since you do not want to physically backup each individual SCSI drive on the RAID • Device memory imaging • Flash cards, memory sticks, PDAs, etc
dd command • It is not a backup command • It is a low-level command designed for copying bits of information from one place to another • dd does not have any knowledge of the “data structure” of the data it is copying • Can copy • Single file • Part of a file • Partition • Logical and Physical Disks • Even copy data from stdin to stdout
dd as a backup tool • Our GOAL: Preserve the state of the evidence • UNIX dd command is most effective
Basic dd operation • # dd if=device of=device bs=blocksize • The if= argument specifies the input file • The of= argument specifies the output file • The bs= argument specifies the block size, or the amount of data that is to be transferred in one I/O operation Note: Changing the block size does not affect how the data is physically written to a disk device. It does matter when writing to a tape device, as each block becomes a record. When reading the tape back, you need to specify the correct block size.
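As a minimal sketch, imaging the victim's first IDE disk to a file on storage mounted from your response kit; the device and mount-point names are assumptions:
# dd if=/dev/hda of=/mnt/evidence/victim-hda.img bs=4096
# md5sum /dev/hda /mnt/evidence/victim-hda.img (matching checksums show the image is a faithful copy)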
Have you cleaned your media? • Before you start imaging anything you need to be sure the destination media is clean # dd if=/dev/zero of={device} Typical media devices would include /dev/st0 (tape) /dev/fd0 (floppy) /dev/hda (first ide hard drive) /dev/sda (first scsi hard drive) /dev/sdb (second scsi drive, etc.)
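A quick spot check, assuming the first SCSI drive was the device just wiped, that the wipe really wrote zeros:
# dd if=/dev/sda bs=512 count=16 | od -x (od collapses the all-zero blocks, so clean media shows a single line of zeros followed by *)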
dd tricks • Using dd to determine the unknown block size of a tape • # dd if=device bs=128k of=/tmp/tapetest count=1 • This tells dd to read data, using a block size of 128k, until it hits the first record gap. • If 128k is not large enough you will get an error • If it is, the size of the resulting file is the block size of that tape • If you don’t know what kind of tape it is, you can run the command file on the /tmp/tapetest file. • This will use the /etc/magic file to determine the file type. It can generally guess tar, cpio, and dump file types. If it doesn’t know… it will just come back as data.
I have no media!!! • This dd is great and all, but what if my computer does not have any extra media on it like a tape backup or an extra hard drive? • How do I secure a backup without powering down? • I'm glad you asked. When you use dd together with another tool, netcat, written by Hobbit of the L0pht crew (later part of @stake), you can become a backup guru!
Network backup using dd, des, and netcat • If no removable media exists on the victim machine, you need to devise a way to copy the physical disk of the system without having to power down • The easiest and least complicated way of doing this is by using two machines, both with netcat and dd. • The second machine should have an attached hard drive or tape drive capable of storing your evidence. • This should be done before anything else is checked on the system to ensure data preservation
Network backup • # dd if=(local physical disk) | des -e -c -k keyword | nc -w 3 other.host.net 31337 • # nc -l -p 31337 | des -d -c -k keyword | dd of=(local backup media) • DES is used to encrypt the data sent across the network so that any unwanted parties listening on the wire cannot read the evidence or easily record that the transfer occurred • You can also throw in GZIP and GUNZIP if the incident response PC you are sending the raw data to is short on space.
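One hedged addition: hash the source disk on the victim and the received image on the response machine so you can later prove the network copy is faithful (device and file names are assumptions):
# md5sum /dev/hda (run on the victim against the local physical disk)
# md5sum /evidence/victim-hda.img (run on the response machine; the two sums should match)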
Finding Clues • For a long time he remained there, turning over the leaves and dried sticks, gathering what seemed to me to be dust into an envelope and examining with his lens not only the ground, but even the bark of the tree as far as he could reach. - Dr. Watson on Sherlock Holmes
Finding Clues • The most important realization one can take in correctly sifting through Gigabytes of data or files is the knowledge that patience and timeliness are virtues to be extolled • Never rush a process designed to be meticulous
Finding Clues • Once backup is done start looking for clues • Be careful to avoid tampering with the system when it is in the middle of a backup. • Even though the emphasis might be to quickly assess the WHAT of a situation, if you try and answer that question without preserving the scene of the crime you will inadvertently erase the evidence you seek
Hacker Profiling • When starting to go through a system begin to build a profile of the hacker. • Skill Level • How well he stays hidden • Is he using a new method? • You can help the community by detailing new attacks to your systems • Again, worry that this might be the tip of the iceberg
System Logging • There are many ways for your system to track a person on it. Remember, your purpose is to make it look like you are NOT on to the hacker. • So you need to ensure you do not let him see you run commands that you would ordinarily not run • Remember your HISTORY files • Bash writes its history file only when you LOG OFF, not while you are still logged onto the machine.
Finding Clues • What are we really looking for? • DATES and TIMES • TROJAN BINARIES • HIDDEN DIRECTORIES • OUT OF PLACE FILES OR SOCKETS • ABNORMAL PROCESSES • We need to find one clue, and once we do, everything else almost always falls into place
Finding Clues • uname -a (system info) • uptime (was it rebooted since the hack?) • ps -efl (all process information) • lsof (list of open files) • cat /etc/inetd.conf (checks what is supposed to be running on the system) • ifconfig -a (network interface information) • netstat -nav (lists open network sockets) • find / -name "*history" -print (the following two commands will find any history files and then cat out their contents) • find / -name "*history" -exec cat {} \;
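A minimal sketch of running those commands from trusted binaries on your response CD and shipping the output over netcat instead of writing to the victim's disk; the mount point, listener host, and port are assumptions:
On the response machine: # nc -l -p 9999 > victim-triage.txt
On the victim: # ( /mnt/cdrom/uname -a; /mnt/cdrom/uptime; /mnt/cdrom/ps -efl; /mnt/cdrom/lsof; /mnt/cdrom/netstat -nav; /mnt/cdrom/ifconfig -a ) | /mnt/cdrom/nc -w 3 responder.host.net 9999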
Finding Clues using INODES • How to find Trojans easily? • INODES!!! • An inode is the per-file pointer that tells the OS where to find the file's data on the logical device • Inodes can be the most telling indicator that something is where it shouldn't be • # ls -lit | sort | more • This command outputs a listing of all files with their inode numbers • Look for OUT OF PLACE entries, either very high or very low. Also look for new groupings, since the rootkit was probably installed all at the same time.
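One hedged refinement: sort numerically on the inode column so clusters of recently created files stand out; the directories examined here are an assumption:
# ls -liat /bin /sbin /usr/bin | sort -n | more (binaries installed together, such as a rootkit, show nearly consecutive inode numbers)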
Finding Clues using BASH • Type # cd .{TAB TAB} • Hit TAB in a directory that you feel might contain a hidden directory. • While your ls might be trojaned and unable to show the hidden directory, your BASH shell's completion will still reveal it.
Finding Clues • You can execute any number of commands and perform a quick analysis of the system that was compromised • I recommend considering the TCT (The Coroner’s Toolkit) tools at this point as well, or as a substitute. But sometimes it might be overkill if you are not even sure that a compromise has actually occurred. www.fish.com/forensics • As you can see, this method can be very useful in grabbing data quickly and being able to save it.
Finding Clues with MAC times • What is a MAC Time? • Internal timestamps kept for each file on most operating systems, even Windows • Modified – the file's contents were last modified • Access – the file was last accessed (even if it was only executed) • Changed – the file's attributes (permissions, inode, owner, group, etc.) were last changed
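On most Linux systems you can read all three timestamps for one file with stat, or sort a directory by each timestamp with ls; the file and directory names here are assumptions:
# stat /bin/login (prints the Access, Modify and Change times together)
# ls -alt /usr/bin | more (most recently modified first)
# ls -altu /usr/bin | more (most recently accessed first)
# ls -altc /usr/bin | more (most recently changed attributes first)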
MAC_DADDY (your infrared goggles for the file system) • Portable MAC time analysis rewritten to run from floppy • No need to write to the victim system • STAND-ALONE • PORTABLE • WRITES TO STDOUT so output can be sent over netcat • www.incident-response.org has the source
MAC Times • Benefits • Loadable Kernel Modules • Encryption • Covert Channeling • Rootkits • Compiling Programs • All of these activities light up the file system like a Christmas tree and leave traces that MAC time analysis reveals, just as infrared goggles would
Tale of Two Binaries • The world is full of obvious things which nobody by any chance ever observes. -Sherlock Holmes
Analyzing Binaries • What if we encounter a file or find a running process on a system? • We will need to analyze what this file is used for and what its implications are • Two situations • Binary file found (not currently running as a process) • Binary file found or not found, but there is a running process
General File Analysis • The first command to run on ANY file is actually the command “file” • Classifies the files according to the type of data they contain based on the /etc/magic file • ASCII text • C program text • C-shell commands • Data • Empty • i386 386 executable • Directory
General File Analysis • file will also tell you if the binary is dynamically or statically linked, and whether it carries debug output or is stripped • Dynamically Linked • Requires the availability of libraries loaded by the operating system • Statically Linked • All functions are included in the binary, but this results in a larger executable file. Seen typically in applications like Netscape • Debug Output • Includes debugging symbols (e.g. variable names, functions, internal symbols, source line numbers, source file information) • Stripped • Symbols have been discarded from the object files, making the executable much smaller
General File Analysis • If the file is dynamically linked you can find out which libraries are needed by the binary by using the command ldd • ldd • Displays a list of the shared libraries each program requires
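For example, running ldd against a copied suspect binary (the file name is an assumption and the load addresses are illustrative):
# ldd ./suspect_binary
libc.so.6 => /lib/libc.so.6 (0x40020000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)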
General File Analysis • strings is the second command; it displays printable character sequences that are at least 4 characters long, and the -a option shows all the strings in the binary file (strings -a) • You can infer a lot from this method alone. • Try comparing this against the output from known binaries to get an idea of what the program might be
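A short sketch of mining the strings output; the binary name and the search keywords are assumptions:
# strings -a ./suspect_binary | more
# strings -a ./suspect_binary | grep -i -e usage -e socket -e promisc (usage banners and sniffer or backdoor keywords are frequent giveaways)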
General File Analysis • grep, sort, uniq, cut • All useful tools for parsing data • I have routinely used these commands on large files of data, searching for needles in haystacks. I usually find what I'm looking for.
lsof • List of Open Files • An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket.) A specific file or all the files in a file system may be selected by path.
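Three invocations that are particularly useful during an incident; the process ID shown is an assumption:
# lsof -i (all open network sockets and the processes holding them)
# lsof -p 1337 (every file a single suspicious process has open)
# lsof +L1 (open files that have been unlinked from the disk, a favorite hiding spot)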
In-Depth Analysis • gdb, ltrace, strace • The emphasis of these tools is to track the way a program interacts with the computer it is run on • What libraries does it use? • How does it use memory? • What is the program doing to the system?
gdb • gdb (the GNU Debugger) • VERY useful program to examine what files do on a system. • Usually invoked by: # gdb filename • Useful commands to type • info functions • info variables • info scope [function] • break • run
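A minimal sketch of a first look with gdb; the binary name is an assumption, and the info commands do not execute the program:
# gdb ./suspect_binary
(gdb) info functions (lists the functions gdb can identify)
(gdb) info variables (lists global and static variables)
(gdb) break main (sets a breakpoint before any code runs)
(gdb) run (only do this on an isolated analysis machine)
(gdb) quit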
strace • Digital wiretap on the interactions between a program and the operating system • Displays information on • File access • Network access • Memory access • And other calls it might make • Cons: it actually has to RUN the program to do this. Danger!
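If you accept that risk, run it only on an isolated analysis machine and capture the wiretap to a file; the file and binary names are assumptions:
# strace -f -o suspect.trace ./suspect_binary (-f follows child processes, -o writes the trace to suspect.trace)
# grep -e open -e connect -e socket suspect.trace (quick look at file and network activity)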
ltrace • ltrace is a program that simply runs the specified command until it exits. It intercepts and records the dynamic library calls which are called by the executed process and the signals which are received by that process. It can also intercept and print the system calls executed by the program.
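The same caution applies, since ltrace also executes the program; a hedged invocation, with the file names as assumptions:
# ltrace -f -S -o suspect.ltrace ./suspect_binary (-S records system calls alongside the library calls)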