What are the three pillars of infosec? (Confidentiality, integrity, and availability.)
IS 380 Security Architecture and Design
Computer Hardware Architecture • The CPU is the brain of the computer • The book says 40M transistors; an Intel Core i7 has 700M+, the rumored 'i9' (Q1 2010) about 2B, and IBM's POWER7 (released Feb 8, 2010) 1.2B • The Arithmetic Logic Unit (ALU) performs the actual execution of instructions • The CPU communicates with the memory stack for each process
What does this have to do with security? • Compromising the boundary between a process's data stack and its executable code in memory allows new instructions to be inserted • Arbitrary code execution is the holy grail of malware • Not a new approach – 'Smashing the Stack for Fun and Profit' was published in 1996
Buffer overflow P0wn3d
Definitions • Multiprogramming – more than one program loaded in memory • Multitasking – the computer handles requests from several processes at the same time • Multithreading – a single application can run multiple threads • Multiprocessing – more than one CPU (or core)
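A minimal sketch of multithreading in Python (not from the slides): one process, several threads, all sharing the same memory. The `results` list and `worker` function are illustrative names.

```python
# One process, four threads, one shared memory space (the `results` list).
import threading

results = []             # shared by all threads in this process
lock = threading.Lock()  # serialize access to the shared list

def worker(n):
    # Each thread runs its own instruction stream but reads and
    # writes the parent process's memory.
    with lock:
        results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # squares computed by four threads: [0, 1, 4, 9]
```

Note that the threads share `results` directly, unlike separate processes, which would each get an isolated copy.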
Operating system architecture • Process – a program running in memory • Threads – individual instruction streams, with associated data, within a process • Created and destroyed as needed • Share the resources of the parent process • Memory management – provides protection for the OS, abstraction for programmers, and resource handling for applications
Mini-lab • Open task manager • List the current processes • What process is using the most memory? • What process has generated the most threads? • What process is using the largest portion of CPU time?
Multitasking • Old and busted: Cooperative multitasking • Process voluntarily released resources • New hotness: Preemptive multitasking • Time sharing/slicing. • States: • Running (executing) • Ready (waiting for CPU time) • Blocked (waiting for input)
Process Isolation • If one process crashes, other processes continue to run • Interprocess communication – processes can still be allowed to communicate with each other • Encapsulation/data hiding • Arrived in Windows with NT/95
Memory management • Virtual memory – extending RAM onto secondary storage • Memory mapping – the system for tracking which memory belongs to each process • Base register • Limit register • Processes use logical addresses rather than physical addresses
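The base/limit scheme can be sketched in a few lines; this is an illustrative toy, with made-up addresses, not how any real MMU is programmed:

```python
# Sketch: translating a logical address using base and limit registers.
def translate(logical_addr, base, limit):
    """Return the physical address, or raise if outside the process's segment."""
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryError("segmentation fault: address outside limit")
    return base + logical_addr

# A process whose segment starts at physical 0x4000 and is 0x100 bytes long:
print(hex(translate(0x10, base=0x4000, limit=0x100)))  # 0x4010

try:
    translate(0x200, base=0x4000, limit=0x100)  # beyond the limit register
except MemoryError as e:
    print("blocked:", e)
```

The limit check is the memory-protection half of the story: a process can name any logical address it likes, but only addresses inside its own segment ever reach physical memory.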
Memory problems • Memory leaks – a process does not return memory it no longer needs • Thrashing – more time is spent moving pages between memory and secondary storage than on actual processing
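A memory leak in miniature, as a hedged sketch (the `RequestHandler` class is invented for illustration): finished work is never released, so the process's footprint only grows.

```python
# Sketch of a memory leak: a handler that records every request
# but never discards them, so memory for 'done' work is never returned.
class RequestHandler:
    def __init__(self):
        self._seen = []                 # grows forever: the leak

    def handle(self, request):
        self._seen.append(request)      # kept even after the request is done
        return request.upper()

handler = RequestHandler()
for i in range(1000):
    handler.handle(f"req-{i}")

print(len(handler._seen))  # 1000 requests still referenced, none released
```

Even in a garbage-collected language, memory that is still reachable is never reclaimed; the fix is to drop references (e.g., clear `_seen`) when the data is no longer needed.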
Other types of memory • ROM – Read only • PROM – ROM that can be written to once. • EPROM – erase with UV light • EEPROM – erase with electricity • Flash – BIOS, camera cards, etc. • Cache memory – extremely high performance RAM
Protection rings • Ring 0 – Operating system kernel • Ring 1 – Remaining parts of the OS • Ring 2 – I/O drivers and utilities • Ring 3 – Applications and user activity • Lower numbers are more trusted and provide more access to system resources
Ring security • Monolithic kernel – all kernel activity runs in ring 0 (Windows, OS X) • Fewer ring switches mean a faster OS • Only ring 0 (kernel) and ring 3 (apps) are used • Layered operating system – data hiding; more security • Client/server (microkernel) – as much as possible runs in user mode
Virtual machines • Simulates an operating system • Provides an excellent test environment • Will translate instruction sets to the CPU if necessary • Maximize underutilized hardware. Space/cooling/power cost reductions
Additional storage devices • CD/DVD, USB, Bluetooth, BlackBerry, MP3 players • They connect directly to the OS and bypass perimeter controls • How do we handle them in the security policy?
Trusted Computing Base (TCB) • The programs, instructions, and hardware that we trust • Trusted path – a hardened communications channel between the user/program and the kernel • Trusted shell – a shell that cannot be accessed by processes or users outside of the TCB • The TCB defines the security perimeter – the total combination of protection within a computer system
TCB basic functions • Process activation – preparing to run a process by loading instructions and data into memory. • Activation happens when the CPU acts on the process interrupt request. • Execution domain switching • switching from user mode to privileged mode and back again • Memory protection • I/O operations
Reference Monitor & Security Kernel • Reference monitor – mediates all subject/object interactions • Essentially a model for access control • Security kernel – enforces the reference monitor • Isolates and protects the processes controlling access • Invoked on every access attempt • Tested and verified to be non-circumventable
Security models • Provide a methodology for designing secure systems • Examples on pages 334-356 • All of them use access control and adhere to the principle of least privilege
State Machine Models • At any given instant, the machine is in a secure state • State transitions are allowed only if they cannot compromise the system, even across system failures • On an illegal operation, the machine reboots or freezes to protect data and security
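A toy state machine in this spirit, with invented states and transitions for illustration: every transition is checked against an allow-list, and anything else freezes the machine rather than entering an unknown state.

```python
# Toy secure state machine: only listed transitions between secure states
# are permitted; any illegal operation freezes the machine.
SECURE_STATES = {"idle", "authenticating", "authorized"}
ALLOWED = {
    ("idle", "authenticating"),
    ("authenticating", "authorized"),
    ("authenticating", "idle"),   # failed login returns to idle
    ("authorized", "idle"),       # logout
}

def transition(state, new_state):
    if (state, new_state) in ALLOWED and new_state in SECURE_STATES:
        return new_state
    return "frozen"  # protect data rather than enter an insecure state

print(transition("idle", "authenticating"))  # authenticating
print(transition("idle", "authorized"))      # frozen: can't skip authentication
```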
Bell-LaPadula – confidentiality • Multilevel – users with different security clearances can use the same system • Three rules: • Simple Security Rule – a subject at one security level cannot read data at a higher security level ('no read up') • * (Star) Property Rule – a subject at a given security level cannot write information to a lower security level ('no write down') • Strong Star Property Rule – a subject with read and write access can exercise both only at its own security level
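The two basic Bell-LaPadula rules reduce to simple level comparisons. A sketch, with illustrative level names and numbers (higher = more sensitive):

```python
# Bell-LaPadula in miniature: confidentiality via level comparisons.
LEVELS = {"unclassified": 0, "secret": 1, "top-secret": 2}

def can_read(subject_level, object_level):
    # Simple Security Rule: no read up
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # * (Star) Property Rule: no write down
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "top-secret"))   # False: no read up
print(can_write("top-secret", "secret"))  # False: no write down
print(can_read("top-secret", "secret"))   # True: reading down is fine
```

'No write down' is the less intuitive rule: it stops a top-secret subject (or a Trojan running as one) from copying sensitive data into a file that lower-cleared subjects can read.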
Biba Model – integrity • Data at a lower integrity level must not flow into a higher integrity level • * Integrity Axiom – a subject cannot write data to an object at a higher integrity level ('no write up') • Simple Integrity Axiom – a subject cannot read data from a lower integrity level ('no read down') • Invocation Property – a subject cannot request service from subjects of higher integrity ('dirty' data cannot pollute clean data processes)
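Biba's axioms are the mirror image of confidentiality rules: the comparisons flip because the goal is keeping dirty data out, not keeping secrets in. A sketch with illustrative integrity labels (higher = more trustworthy):

```python
# Biba in miniature: integrity via level comparisons (note the flipped signs).
INTEGRITY = {"dirty": 0, "checked": 1, "trusted": 2}

def can_read(subject_level, object_level):
    # Simple Integrity Axiom: no read down
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level, object_level):
    # * Integrity Axiom: no write up
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

print(can_read("trusted", "dirty"))     # False: dirty data can't flow up
print(can_write("checked", "trusted"))  # False: no write up
print(can_read("dirty", "trusted"))     # True: reading up is harmless here
```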
Clark-Wilson Model – integrity • Users – active agents • Transformation procedures (TPs) – read/write/modify; a program mediates all access • Constrained data items (CDIs) – can only be manipulated by TPs; high protection • Unconstrained data items (UDIs) – can be manipulated by users via simple read/write • Integrity verification procedures (IVPs) – check consistency of CDIs with reality
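The TP/CDI/IVP triad can be sketched with a toy bank balance (an invented example, not from the book): users never touch the CDI directly; the TP is the only sanctioned path, and the IVP confirms the result is still consistent.

```python
# Clark-Wilson sketch: a TP mediates every change to a CDI, and an IVP
# verifies the CDI is still consistent after each change.
balance_cdi = {"account": 100}   # constrained data item

def ivp(cdi):
    # IVP: the balance must never go negative
    return cdi["account"] >= 0

def tp_withdraw(cdi, amount):
    # TP: the only sanctioned way to modify the CDI
    if amount <= 0 or amount > cdi["account"]:
        raise ValueError("withdrawal rejected by TP")
    cdi["account"] -= amount
    assert ivp(cdi), "IVP failed: inconsistent state"

tp_withdraw(balance_cdi, 30)
print(balance_cdi["account"])    # 70

try:
    tp_withdraw(balance_cdi, 500)  # would overdraw; the TP refuses
except ValueError as e:
    print("blocked:", e)
```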
Biba vs. Clark-Wilson • Integrity model goals: • Prevent unauthorized users from making changes (Biba and CW) • Prevent authorized users from making improper modifications (CW separation of duties) • Maintain internal and external consistency (well-formed transactions) (CW IVPs)
Information Flow Models • Covert channel – a way to receive information in an unauthorized manner • Covert storage channel – e.g., signaling information by writing to, or probing for, a file that exists • Covert timing channel – e.g., modulating CPU usage • Inference attack – access to some information allows one to infer information above one's clearance level • Noninterference model – actions at one level do not interfere with another level • EAL – the higher the EAL rating, the fewer covert channels are possible
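A covert timing channel can be sketched in a few lines. This is a deliberately simplified illustration: sender and receiver run in one loop, and the 'shared resource' is just elapsed wall-clock time, standing in for CPU load a real receiver would observe.

```python
# Sketch of a covert timing channel: the sender leaks one bit per interval
# by delaying (1) or not (0); the receiver decodes by timing each interval.
import time

SLOW = 0.05  # seconds; a '1' bit is signaled by a noticeable delay

def covert_send(bits):
    received = []
    for bit in bits:
        start = time.monotonic()
        if bit:                      # sender side: burn time for a 1
            time.sleep(SLOW)
        elapsed = time.monotonic() - start
        received.append(1 if elapsed > SLOW / 2 else 0)  # receiver side
    return received

print(covert_send([1, 0, 1, 1]))  # the receiver recovers [1, 0, 1, 1]
```

No data is ever written from sender to receiver; the information flows entirely through timing, which is why such channels evade ordinary access controls.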
Other Models • Lattice Model – upper and lower bounds of rights. • Brewer and Nash Model – ‘Chinese wall’. Prevents conflict of interest by limiting view to one dataset. • Graham-Denning Model – how security and integrity ratings are defined and a way to delegate or transfer rights.
Security Modes • Dedicated – all users have clearance and need-to-know for all data • System high – all users have clearance for all data; need-to-know applies • Compartmented – access to data requires need-to-know and formal approval • Multilevel – all users, some data: need-to-know, clearance, and formal approval all apply
Orange Book • Trusted Computer System Evaluation Criteria (TCSEC) • A – Verified protection: development, design, and evaluation are very stringent (A1, military) • B – Mandatory protection (B3 – highly secure/military) • C – Discretionary protection (C2 – commercial, but still weak security: NT4) • D – Minimal security (FAIL!) • Addresses only confidentiality: based on Bell-LaPadula
ITSEC – Information Technology Security Evaluation Criteria • Rates functionality and assurance separately • Developed in Europe
Common Criteria • ISO • Evaluation Assurance Level • EAL1 – Functionally tested • EAL2 – Structurally tested • EAL3 – Methodically tested • EAL4 – Methodically designed, tested, and reviewed. (Windows 2003 SP1, XP SP2) • EAL5 – Semiformally designed and tested • EAL6 – Semiformally verified design and tested • EAL7 – Formally verified design and tested
Common Criteria (cont.) • Functionality and assurance • Protection profiles • Descriptive elements – description of problem solved • Rationale – justify the profile, real-world problem solved, environment, policies, etc. • Functional requirements – protection boundary • Development assurance requirements – requirements must be met during development • Evaluation assurance requirements – type and intensity of evaluation
Notes • Evaluations apply to a specific version of software • ...and a specific configuration • Certification – technical review • Evaluate and test software, hardware, firmware, design, implementation, procedures, communication • Make sure you have the right system for the right job • Accreditation – management's acceptance of the overall functionality & security of the system
Enterprise Security Architecture • Provides technical details for your security policy • Includes network schematics, tools, processes and roles necessary to implement the security policy • Must incorporate business needs as well as legal and regulatory requirements
Open vs. Closed systems • Open – interoperability between vendors • Closed – vendor lock-in. ‘black box’.
Enterprise Security Architecture • Layers of policy, standards, solutions, and procedures linked across the enterprise strategically, tactically, and operationally. Think: planned. Or • The opposite of how most companies work (point solutions cobbled together from different (pet?) projects and forced to interoperate)
Zachman Framework • Direction on how to understand an enterprise in a modular fashion • Aids in understanding the environment. • Organizational, not technical.
Related Threats • Maintenance hooks/back doors • Time of check/time of use (TOC/TOU) – an attacker jumps in between the check and the use and changes something • Race conditions – getting a process to execute steps out of sequence • Buffer overflows • 5-15 bugs in every 1,000 lines of code (Carnegie Mellon) • 1 security flaw per 1,000 lines of code (DHS review of 180 open source products) • Windows Vista: ~50,000,000 lines of code
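The time-of-check/time-of-use gap can be made concrete with a file sketch (paths and names are illustrative): the vulnerable pattern checks first and uses later, leaving a window for an attacker to swap the file; the safer pattern makes the attempt directly, collapsing check and use into one operation.

```python
# Sketch of a TOC/TOU gap and the check-free alternative.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "config.txt")
with open(path, "w") as f:
    f.write("safe contents")

# Vulnerable: check ... gap ... use
if os.access(path, os.R_OK):        # time of check
    # <-- an attacker could replace `path` with a symlink here
    with open(path) as f:           # time of use
        data = f.read()

# Safer: just attempt the operation and handle failure (no check/use gap)
try:
    with open(path) as f:
        data = f.read()
except OSError:
    data = None

print(data)
```

The second pattern does not eliminate every file-swap attack on its own, but it removes the explicit race window that the check-then-use code invites.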