Week 7 - Monday CS363
Last time • What did we talk about last time? • Malicious code case studies • Exam 1 post mortem
Security Presentation Omar Mustardo
Code Red • Code Red appeared in 2001 • It infected a quarter of a million systems in 9 hours • It is estimated to have infected 1/8 of the systems that were vulnerable • It exploited a buffer overflow vulnerability in a DLL in the Microsoft Internet Information Server (IIS) software • It only worked on systems running the Microsoft web server, but many machines ran one by default
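The flaw at the heart of Code Red came from copying attacker-controlled input into a fixed-size buffer with no length check. A minimal sketch of that general kind of bug (not the actual IIS code; the function and buffer here are illustrative):

```c
#include <string.h>

/* Illustrative only: a fixed-size stack buffer filled from
 * attacker-controlled input with no length check.  A request longer
 * than the buffer overwrites adjacent stack memory, including the
 * return address -- the same general flaw Code Red exploited in IIS. */
void handle_request(const char *request)
{
    char buffer[256];
    strcpy(buffer, request);   /* no bounds check: overflows if request is too long */
    /* ... process buffer ... */
}
```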
Versions • The original version of Code Red defaced the website that was being run • Then, it tried to spread to other machines on days 1-19 of a month • Then, it did a distributed denial of service attack on whitehouse.gov on days 20-27 • Later versions attacked random IP addresses • It also installed a trap door so that infected systems could be controlled from the outside
Trapdoors • A trapdoor is a way to access functionality that is not documented • They are often inserted during development for testing purposes • Sometimes a trapdoor arises from error cases that are not correctly checked or handled
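A hypothetical sketch of an intentional trapdoor: a debugging bypass left in an authentication routine (the magic string and the helper function are made up for illustration):

```c
#include <string.h>

/* Stand-in for a real credential check. */
static int check_credentials(const char *user, const char *password)
{
    (void)user; (void)password;
    return 0;   /* placeholder: assume the real check lives elsewhere */
}

int authenticate(const char *user, const char *password)
{
    /* Leftover test hook: anyone who knows the magic string gets in,
     * no matter what the real credential check says. */
    if (strcmp(password, "let_me_in_test") == 0)
        return 1;
    return check_credentials(user, password);
}
```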
Causes of trapdoors • Intentionally created trapdoors can exist in production code when developers: • Forget to remove them • Intentionally leave them in for testing • Intentionally leave them in for maintenance • Intentionally leave them in as a covert means of access to the production system
Salami attacks • I had never heard this term before reading this book • This is the Office Space attack • Steal tiny amounts of money when a cent is rounded in financial transactions • Or, steal a few cents from millions of people • Steal more if the account hasn’t been used much • The rewards can be huge, and these kinds of attacks are hard to catch
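A sketch of the arithmetic behind the rounding version of the attack (the balance and interest rate are made-up numbers): interest is computed to a fraction of a cent, the account is credited only whole cents, and the shaved remainder is diverted.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double balance  = 1234.56;               /* one account, in dollars (made up)   */
    double interest = balance * 0.000137;    /* daily interest, to fractions of a cent */
    double credited = floor(interest * 100.0) / 100.0;  /* round down to a whole cent */
    double skimmed  = interest - credited;   /* the "salami slice" the attacker keeps */

    printf("credited %.2f, skimmed %.6f\n", credited, skimmed);
    return 0;
}
```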
The Sony XCP rootkit • A rootkit is malicious code that gives an attacker access to a system as root (a privileged user) and hides from detection • Sony put a program on music CDs called XCP (extended copy protection) which allowed users to listen to the CD on Windows but not rip its contents • It installed itself without the user’s knowledge • It had to have control over Windows and be hard to remove • It would hide the presence of any program starting with the name $sys$, but malicious users could take advantage of that
Privilege escalation • Most programs are supposed to execute with some kind of baseline privileges • Not the high-level privileges needed to change system data • Windows Vista and 7 ask you if you want to have privileges escalated • Sometimes you can be tricked • Symantec needed high-level privileges to run LiveUpdate • Unfortunately, it ran some local programs with high privileges • If a malicious user had replaced those local programs with his own, ouch
Keystroke logging • It’s possible to install software that logs all the keystrokes a user enters • If designed correctly, these values come from the keyboard drivers, so all data (including passwords) is visible • There are also hardware keystroke loggers • Most are around $40 • Is your keyboard free from a logger?
Good software development • We only have time for a few slides about good software development • A shame, since good development stops both unintentional and malicious flaws • Development lifecycle: • Specify the system • Design the system • Implement the system • Test the system • Review the system • Document the system • Manage the system • Maintain the system
Modularity • A goal of software engineering should be to build software from robust, independent components • This is called modularization • Components should meet the following criteria: • Single-purpose: Perform one function • Small: Short enough to be understandable by a single human • Simple: Simple enough to be understandable by a single human • Independent: Isolated from other modules
Encapsulation • Components should hide their implementation details • A component should expose only the small number of public methods needed for other components to interact with it • This information hiding model is thought of as a black box • For both components and programs, one reason for encapsulation is mutual suspicion • We always assume that other code is malicious or badly written
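One way to realize this black-box idea in C (a sketch of the pattern, not an example from the book) is an opaque type: callers see only the function declarations, and the structure's fields are visible only inside the module that implements it.

```c
/* counter.h -- the public interface; this is all other modules see. */
typedef struct counter counter;              /* opaque: fields are hidden */
counter *counter_new(void);
void     counter_increment(counter *c);
int      counter_value(const counter *c);

/* counter.c -- the hidden implementation. */
#include <stdlib.h>

struct counter { int value; };               /* invisible to other modules */

counter *counter_new(void)                { return calloc(1, sizeof(counter)); }
void     counter_increment(counter *c)    { c->value++; }
int      counter_value(const counter *c)  { return c->value; }
```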
Testing • Unit testing tests each component separately in a controlled environment • Integration testing verifies that the individual components work when you put them together • Function and performance testing sees whether a system performs according to specification • Acceptance testing gives the customer a chance to test the product you have created • The final installation testing checks the product in its actual use environment
Testing methodologies • Regression testing is done when you fix a bug or add a feature • We have to make sure that everything that used to work still works after the change • Black-box testing uses input values to test for expected output values, ignoring the internals of the system • White-box or clear-box testing uses knowledge of the system to design tests that are likely to find bugs • Testing can only prove that there are bugs; it is impossible to prove that there aren’t any
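A minimal sketch of a black-box regression test: inputs are checked against expected outputs without looking at the implementation, and the test is rerun after every change. (The function under test here is a made-up stand-in.)

```c
#include <assert.h>

/* Stand-in function under test; the tests below treat it as a black box. */
static int absolute_value(int x) { return x < 0 ? -x : x; }

int main(void)
{
    assert(absolute_value(-5) == 5);   /* normal negative case     */
    assert(absolute_value(0)  == 0);   /* boundary case            */
    assert(absolute_value(7)  == 7);   /* already-positive case    */
    return 0;                          /* rerun after every change */
}
```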
Standards • If you program for a living, you will probably be held to standards • Standards cannot guarantee bug-free code, but they can help
OS security • The OS has to enforce much of the computer security we want • Multiple processes are running at the same time • We want protection for: • Memory • Hard disks • I/O devices like printers • Sharable programs • Networks • Any other data that can be shared
Separation • OS security is fundamentally based on separation • Physical separation: Different processes use different physical objects • Temporal separation: Processes with different security requirements are executed at different times • Logical separation: Programs cannot access data or resources outside of permitted areas • Cryptographic separation: Processes conceal their data so that it is unintelligible
Memory protection • Protecting memory is one of the most fundamental protections an OS can give • All data and operations for a program are in memory • Most I/O accesses are done by reading and writing particular memory locations • Techniques for memory protection • Fence • Base/bounds registers • Tagged architectures • Segmentation • Paging
Fence • A fence can be a predefined or variable memory location • Everything below the fence is for the OS • If a program ever tries to access memory below the fence, it either fails or is shut down • As with many memory schemes, code needs to be relocatable, so the program is written as if it starts at memory location 0 but can actually be offset to an appropriate location • [Diagram: OS memory below the fence, user program memory above it]
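A minimal sketch of the check made on every user-mode access, assuming an illustrative fence address:

```c
/* Illustrative fence check: the fence address here is made up, and a
 * real system would raise a fault rather than return a flag. */
#define FENCE 0x00100000UL             /* everything below this is OS memory */

int user_access_allowed(unsigned long address)
{
    return address >= FENCE;           /* below the fence: deny the access */
}
```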
Base/bounds registers • In modern systems, many user programs run at the same time • We can extend the idea of a fence to two registers for each program • The base register gives the lowest legal address for a particular user program • The bounds register gives the highest legal address for a particular user program • [Diagram: OS memory, then Program A between its base and bounds registers, followed by Program B and Program C memory]
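A sketch of the corresponding check, assuming the base and bounds values for the currently running program are loaded into a pair of registers:

```c
/* Illustrative base/bounds check for whichever program is running.
 * Real hardware does this on every memory reference and traps on failure. */
typedef struct {
    unsigned long base;     /* lowest legal address for this program  */
    unsigned long bounds;   /* highest legal address for this program */
} base_bounds;

int access_allowed(const base_bounds *regs, unsigned long address)
{
    return address >= regs->base && address <= regs->bounds;
}
```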
Tagged architectures • The idea of base and bounds registers can be extended so that there are separate ranges for the program code and for its data • It is possible to allow data for some users to be globally readable or writable • But this makes data protection all or nothing • Tagged architectures allow every byte (or perhaps defined groups of bytes) to be marked read-only, read/write, or execute-only • Only a few architectures have used this model because of the extra overhead involved
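A sketch of the per-word check a tagged architecture makes (the tag values and word layout here are illustrative):

```c
/* Illustrative tags: each word of memory carries one, and the hardware
 * checks it against the kind of access being attempted. */
typedef enum { TAG_READ_ONLY, TAG_READ_WRITE, TAG_EXECUTE_ONLY } tag;

typedef struct {
    tag          t;        /* protection tag for this word */
    unsigned int data;     /* the word's contents          */
} tagged_word;

int read_allowed(const tagged_word *w)    { return w->t != TAG_EXECUTE_ONLY; }
int write_allowed(const tagged_word *w)   { return w->t == TAG_READ_WRITE;   }
int execute_allowed(const tagged_word *w) { return w->t == TAG_EXECUTE_ONLY; }
```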
Segmentation • Segmentation has been implemented on many processors, including most x86 compatibles • A program sets up several segments such as code, data, and constant data • Writing to code is usually illegal • Other rules can be made for other segments • A memory address is a segment identifier plus an offset within that segment • For performance reasons, the OS can put these segments wherever it wants and do the lookups • Segments can be put on secondary storage if they are not currently in use • The programmer sees a solid block of memory; other users have their own segments • [Diagram: programmer's view of contiguous code, data, and constant data segments vs. the OS view of those segments scattered in memory]
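A sketch of the translation the OS (or hardware) performs on a segment-plus-offset address, with an illustrative table layout:

```c
#include <stddef.h>

/* Illustrative segment table entry; a real table also carries access
 * rights such as "code segment: not writable". */
typedef struct {
    unsigned long base;     /* where the OS placed the segment */
    unsigned long length;   /* size of the segment             */
} segment;

/* Translate a <segment, offset> pair; returns 0 here to signal a
 * violation, where real hardware would raise a fault. */
unsigned long translate(const segment *table, size_t seg, unsigned long offset)
{
    if (offset >= table[seg].length)
        return 0;                        /* offset runs past the segment */
    return table[seg].base + offset;     /* physical address             */
}
```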
Paging • Paging is a very common way of managing memory • A program is divided up into equal-sized pieces called pages • An address is a page number and an offset • Paging doesn’t have the fragmentation problems that segmentation does • It also doesn’t specify different protection levels • Paging and segmentation can be combined to give protection levels • [Diagram: programmer's view of consecutive pages vs. the OS view of those pages scattered in memory; other users have their own pages]
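A sketch of paged address translation, assuming 4 KB pages and a simple array as the page table (both are illustrative choices):

```c
#define PAGE_SIZE 4096UL    /* illustrative page size */

/* Split a virtual address into a page number and an offset, look the
 * page up in the page table, and rebuild the physical address. */
unsigned long translate(const unsigned long *page_table, unsigned long address)
{
    unsigned long page   = address / PAGE_SIZE;   /* which page             */
    unsigned long offset = address % PAGE_SIZE;   /* position within a page */
    unsigned long frame  = page_table[page];      /* assumed present/valid  */
    return frame * PAGE_SIZE + offset;
}
```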
Next time… • More OS security • Access control • Authentication • Cody Kump presents
Reminders • Read Sections 4.1 through 4.4 • Start working on Project 2