Security Andrew Whitaker CSE451
Motivation: Internet Worms “On July 19th, 2001, a self-propagating program, or worm, was released into the Internet. The worm, dubbed Code-Red v2, probed random Internet hosts for a documented vulnerability in the popular Microsoft IIS Web server. As susceptible hosts were infected with the worm, they too attempted to subvert other hosts, dramatically increasing the incidence of the infection. Over fourteen hours, the worm infected almost 360,000 hosts, reaching an incidence of 2,000 hosts per minute before peaking [1]. The direct costs of recovering from this epidemic (including subsequent strains of Code-Red) have been estimated in excess of $2.6 billion [2]” Source: Internet Quarantine: Requirements for Self-Propagating Code, David Moore, Colleen Shannon, Geoffrey M. Voelker, Stefan Savage
Motivation • What’s happening here?
Why Has Security Gotten So Bad? • The Internet • Extensibility • Complexity • An enormous trusted computing base • Grandma
Outline • Protection • Defining what is allowed • Security • Enforcing a protection policy • In the face of adversaries
Protection systems • Protection: controlling who gets access to what • Key concepts: • Objects (the “what”) • Principals (the “who”: a user or program) • Actions (the “how”, or operations) • A protection system dictates whether an action is allowed for a given principal and object • e.g., you can read or write your files, but others cannot • e.g., you can read /etc/motd but cannot write it
Model for Representing Protection • Protection can be modeled with a matrix • Two implementation strategies • Access control lists (rights stored with objects) • Capabilities (rights stored with principals) • [Matrix figure: principals as rows, objects as columns; a column is an ACL, a row is a capability list]
More on UNIX-style protection • Not a full access-control list • Can only specify owner, group, and world permissions • The owner has the power to change access control • This is called discretionary access control • There is an all-powerful “super-user” called root • Root acts as an owner for all files • All programs run as a particular user • The user must have “execute” privilege • A program can change its user via setuid
Setuid • Each process has three user IDs: • Real: the user who invoked the process • Effective: the user for access control • Saved: a previous user ID • Setuid changes the effective user • Which is the one that matters for security • Two ways to invoke • setuid() system call • setuid bit • Changes the effective user to the file owner
Setuid Root • Many programs run as “setuid root” • Can be invoked by anybody • But, run as root • Example: passwd • Passwords stored in a file • Users do not have access to this file • But, they need the ability to change a password • setuid root is extremely dangerous • A compromised setuid program can basically do anything
Principle of Least Privilege • Each program should be given the minimum privilege to accomplish its task • setuid root is a flagrant violation of this • Least privilege for passwd: read and write the password file • Privilege afforded by root: modify any file in the system; inspect kernel memory; access any I/O device; etc.
Why Can’t We Enforce Least Privilege? • The policy is too complex • Which files can a web browser access? • Which files can a web server access? • The mechanisms are not sufficient • For example, no way to control network accesses • In the end, usability wins out over security
Research: System Call Policies • Observation: UNIX file access control is not sufficient • Approach: Add a policy language for inspecting system calls • In principle, this allows us to completely quarantine an application
Example System Call Policy Language

Policy: /usr/sbin/named, Emulation: native
    native-sysctl: permit
    native-accept: permit
    native-bind: sockaddr match "inet-*:53" then permit
    native-break: permit
    native-chdir: filename eq "/" then permit
    native-chdir: filename eq "/namedb" then permit
    native-chroot: filename eq "/var/named" then permit
    native-close: permit
    native-connect: sockaddr eq "/dev/log" then permit

Source: Improving Host Security with System Call Policies, Niels Provos

Open question: where do the policies come from?
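Rules of this shape can be approximated by a small default-deny matcher (an illustrative sketch, not Systrace's actual implementation; `policy_check` and the rule table are made up here, modeled on the named policy above):

```c
/* Sketch of evaluating Systrace-style rules like
   "native-chdir: filename eq \"/\" then permit".
   Anything not explicitly permitted is denied (fail-safe default). */
#include <stddef.h>
#include <string.h>

enum { DENY = 0, PERMIT = 1 };

struct rule {
    const char *syscall;  /* e.g. "chdir" */
    const char *arg;      /* exact argument to match, or NULL for any */
};

/* Illustrative subset of the named(8) policy above */
static const struct rule rules[] = {
    { "chdir",  "/" },
    { "chdir",  "/namedb" },
    { "chroot", "/var/named" },
    { "close",  NULL },
};

/* Default-deny: a call is permitted only if some rule explicitly matches */
int policy_check(const char *syscall, const char *arg) {
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        if (strcmp(rules[i].syscall, syscall) != 0)
            continue;
        if (rules[i].arg == NULL || (arg && strcmp(rules[i].arg, arg) == 0))
            return PERMIT;
    }
    return DENY;
}
```

Note the open question from the slide still applies: the matcher is trivial, but someone must write (or record) the rule table for every application.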
Other Security Principles • Least Privilege • Keep it simple • Fail-safe defaults • Use the best (or better) languages and tools • Think about the end-user
Principle of Fail-Safe Defaults • Policy should list what is allowed, not what is denied • security configuration should deny all access by default • allow only that which has been explicitly permitted • oversights show up as “false negatives” • users will quickly complain • Counterexample: Irix OS • shipped with “xhost +” by default • Allows the world to open windows on your screen and grab the keystrokes you type
Use Better Languages and Tools • Many software vulnerabilities stem from the use of unsafe languages (mostly C) • This is a potential buffer overflow: the attacker can write arbitrary data onto the stack

int main(int argc, char* argv[]) {
    char buffer[256];
    strcpy(buffer, argv[1]);
    return 0;
}
Think About the End User • Many security problems stem from poor user interface design • Famous paper: “Why Johnny Can’t Encrypt” found that ordinary users could not set up encrypted email • Ongoing research: graphical passwords, biometrics, etc.