September 14, 2006, Lecture 3: IS 2150 / TEL 2810 Introduction to Security
What is a secure system? • A simple definition: a secure system does not allow violations of its security policy • Alternative view: based on the distribution of rights to the subjects • Leakage of rights (unsafe with respect to a right r) • Assume the access control matrix A representing a secure state contains no right r in any of its elements • The right r is said to be leaked if a sequence of operations/commands adds r to an element of A that did not previously contain r
What is a secure system? • Safety of a system with initial protection state X0 • Safe with respect to r: the system is safe with respect to r if r can never be leaked • Otherwise it is unsafe with respect to the right r
Safety Problem: formally • Given • initial state X0 = (S0, O0, A0) • a set of primitive commands c • r is not in A0[s, o] • Can we reach a state Xn where • ∃ s, o such that An[s, o] includes a right r not in A0[s, o]? • If so, the system is not safe • But is "safe" the same as "secure"?
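To make the definitions concrete, here is a minimal Python sketch (not from the lecture) of a protection state as a set of (subject, object, right) triples, with a single hypothetical conditional command and a brute-force check for leakage of a right r. The subjects, objects, and the grant command are invented for illustration.

```python
from collections import deque

# A protection state is a frozenset of (subject, object, right) triples,
# i.e. a flattened access control matrix A. Everything named here
# (subjects, objects, the grant command) is a hypothetical example.
SUBJECTS = ("alice", "bob")
OBJECTS = ("alice", "bob", "file1")

def grant_if_owner(state, owner, other, obj):
    """Conditional command: if `owner` has own on obj, enter r for `other`."""
    if (owner, obj, "own") in state:
        return state | {(other, obj, "r")}
    return None  # condition fails: the command does nothing

COMMANDS = (grant_if_owner,)

def leaks(initial, right):
    """Does some reachable state add `right` to a cell of A that did not
    contain it in the initial state?  (Brute force; this terminates only
    because no command here creates subjects or objects.)"""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for cmd in COMMANDS:
            for s1 in SUBJECTS:
                for s2 in SUBJECTS:
                    for obj in OBJECTS:
                        nxt = cmd(state, s1, s2, obj)
                        if nxt is None or nxt in seen:
                            continue
                        if any(t[2] == right and t not in initial for t in nxt):
                            return True
                        seen.add(nxt)
                        queue.append(nxt)
    return False

A0 = frozenset({("alice", "file1", "own")})
print(leaks(A0, "r"))   # True: alice can enter r for bob, so A0 is unsafe w.r.t. r
```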
Decidability Results (Harrison, Ruzzo, Ullman) • Theorem: • Given a system where each command consists of a single primitive command (mono-operational), there exists an algorithm that will determine if a protection system with initial state X0 is safe with respect to right r
Decidability Results (Harrison, Ruzzo, Ullman) • Proof sketch: bound the minimum number of commands k needed to leak • Delete/destroy: cannot cause (or help detect) a leak • Create/enter: all newly created subjects/objects behave the same, so treat all created subjects as one • There is no test for the absence of a right • Tests on A[s1, o1] and A[s2, o2] give the same result as the same tests on A[s1, o1] and A[s1, o2] once s2 is merged into s1, with A[s1, o2] = A[s1, o2] ∪ A[s2, o2] • If a leak is possible with n generic rights, it must be possible within k = n(|S0| + 1)(|O0| + 1) + 1 commands • Enumerate all command sequences up to this length to decide
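As a hypothetical worked example of the bound: with |S0| = 2 initial subjects, |O0| = 3 initial objects, and n = 5 generic rights, a leak, if one is possible at all, must occur within k = 5 · (2 + 1) · (3 + 1) + 1 = 61 commands, so checking all command sequences up to that length decides safety.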
Decidability Results (Harrison, Ruzzo, Ullman) • It is undecidable if a given state of a given protection system is safe for a given generic right • For the proof we need Turing machines and the halting problem
What is the implication? • Safety decidable for some models • Are they practical? • Safety only works if maximum rights known in advance • Policy must specify all rights someone could get, not just what they have • Where might this make sense?
Back to HRU: Fundamental questions • How can we determine that a system is secure? • Need to define what we mean by a system being "secure" • Is there a generic algorithm that allows us to determine whether a computer system is secure?
Turing Machine & halting problem • The halting problem: • Given a description of an algorithm and a description of its initial arguments, determine whether the algorithm, when executed with these arguments, ever halts (the alternative is that it runs forever without halting).
Turing Machine & halting problem • Reduce the TM halting problem to the safety problem • If the safety problem were decidable, we could decide whether a given TM halts, making the halting problem decidable (contradiction)
Turing Machine • A TM is an abstract model of a computer • Proposed by Alan Turing in 1936 • A TM consists of • A tape divided into cells; infinite in one direction • A set of tape symbols M • M contains a special blank symbol b • A set of states K • A head that can read and write symbols • An action table that tells the machine • What symbol to write • How to move the head ('L' for left and 'R' for right) • What the next state is
Turing Machine • The action table describes the transition function δ • Transition function δ(k, m) = (k′, m′, L): • in state k, the symbol m under the head is replaced by the symbol m′, • the head moves left one square, and the TM enters state k′ • The halting state is qf • The TM halts when it enters this state
Turing Machine • Example (figure): tape cells 1–4 hold A B C D, the head is on cell 3, the current state is k, and the current symbol is C • Let δ(k, C) = (k1, X, R), where k1 is the next state: cell 3 is rewritten to X, the head moves right to cell 4, and the machine enters state k1 • Let δ(k1, D) = (k2, Y, L), where k2 is the next state: cell 4 is rewritten to Y, the head moves back to cell 3, and the machine enters state k2
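As an illustration only, here is a small Python sketch of the machine model just described; the action table, tape contents, and the extra (k2, X) halting rule are assumptions added so the example terminates.

```python
# Minimal Turing machine simulator for the model above. The blank symbol,
# states, and action table below are illustrative, not from the lecture.
BLANK = "b"

def run_tm(tape, start, final, delta, head=0, max_steps=1000):
    """`delta` maps (state, symbol) to (next_state, write_symbol, move),
    with move 'L' or 'R'. Returns the halting state and the final tape."""
    cells = dict(enumerate(tape))          # cell index -> symbol
    state = start
    for _ in range(max_steps):
        if state == final:                 # entered qf: halt
            return state, cells
        symbol = cells.get(head, BLANK)
        state, cells[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within step bound")

# Matches the trace above: delta(k, C) = (k1, X, R), delta(k1, D) = (k2, Y, L);
# the (k2, X) rule is an assumed extra step so the machine reaches qf.
delta = {("k", "C"): ("k1", "X", "R"),
         ("k1", "D"): ("k2", "Y", "L"),
         ("k2", "X"): ("qf", "X", "R")}
print(run_tm("ABCD", start="k", final="qf", delta=delta, head=2))
```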
General Safety Problem • Theorem: it is undecidable if a given state of a given protection system is safe for a given generic right • Proof: reduce the TM to the safety problem • Symbols and states → rights • Tape cell → subject • Cell si holds symbol A → si has the right A on itself • Cell sk is the rightmost cell → sk has the right end on itself • State p with the head at cell si → si has the right p on itself • Distinguished right own: • si owns si+1 for 1 ≤ i < k
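A rough Python sketch of this encoding (an illustration, not the proof's formal notation): each tape cell becomes a subject, rights on the diagonal record the cell's symbol, the current state sits on the head's cell, end marks the rightmost cell, and own links neighbouring cells.

```python
def encode(tape, head, state):
    """Encode a TM configuration as an access control matrix:
    (subject, subject) -> set of rights.  Cell i becomes subject s{i}."""
    acm = {}
    cells = [f"s{i}" for i in range(1, len(tape) + 1)]
    for i, symbol in enumerate(tape):
        acm[(cells[i], cells[i])] = {symbol}          # cell's symbol as a right
        if i + 1 < len(cells):
            acm[(cells[i], cells[i + 1])] = {"own"}   # si owns si+1
    acm[(cells[head - 1], cells[head - 1])].add(state)  # state right at the head
    acm[(cells[-1], cells[-1])].add("end")              # rightmost-cell marker
    return acm

acm = encode("ABCD", head=3, state="k")
# acm[("s3", "s3")] == {"C", "k"}; acm[("s4", "s4")] == {"D", "end"}
```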
Command Mapping (Left move) • δ(k, C) = (k1, X, L), when the head is not on the leftmost cell:
command c_k,C(si, si-1)
  if own in a[si-1, si] and k in a[si, si] and C in a[si, si]
  then
    delete k from a[si, si];
    delete C from a[si, si];
    enter X into a[si, si];
    enter k1 into a[si-1, si-1];
end
Mapping • Figure: the tape holds A B C D with the head on cell 3; the current state is k and the current symbol is C • In the matrix: s1 has A on itself and own on s2, s2 has B on itself and own on s3, s3 has C and k on itself and own on s4, and s4 has D and end on itself
Mapping (Left move) • After δ(k, C) = (k1, X, L), where k is the current state and k1 the next state • Figure: the tape now holds A B X D with the head on cell 2 • In the matrix: s1 has A on itself and own on s2, s2 has B and k1 on itself and own on s3, s3 has X on itself and own on s4, and s4 has D and end on itself • If the head is on the leftmost cell, both si and si-1 are s1
Command Mapping (Right move) • δ(k, C) = (k1, X, R):
command c_k,C(si, si+1)
  if own in a[si, si+1] and k in a[si, si] and C in a[si, si]
  then
    delete k from a[si, si];
    delete C from a[si, si];
    enter X into a[si, si];
    enter k1 into a[si+1, si+1];
end
Mapping • Figure: the starting configuration is the same as before: tape A B C D, head on cell 3, current state k, current symbol C, with the matrix as shown above
Mapping (Right move) • After δ(k, C) = (k1, X, R), where k is the current state and k1 the next state • Figure: the tape now holds A B X D with the head on cell 4 • In the matrix: s3 now has X on itself, s4 has D, k1, and end on itself, and the own rights are unchanged
Command Mapping (Rightmost move) • δ(k1, D) = (k2, Y, R) with the head at the end of the tape becomes:
command crightmost_k1,D(si, si+1)
  if end in a[si, si] and k1 in a[si, si] and D in a[si, si]
  then
    delete end from a[si, si];
    create subject si+1;
    enter own into a[si, si+1];
    enter end into a[si+1, si+1];
    delete k1 from a[si, si];
    delete D from a[si, si];
    enter Y into a[si, si];
    enter k2 into a[si+1, si+1];
end
Mapping (Rightmost move) • After δ(k1, D) = (k2, Y, R), where k1 is the current state and k2 the next state • Figure: the tape now holds A B X Y with the head on the newly created cell 5 • In the matrix: s4 has Y on itself and own on the new subject s5, and s5 has b, k2, and end on itself
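The effect of the rightmost-move command can be sketched directly on the matrix encoding. The Python below is a stand-in for the HRU command above (the dictionary layout and helper are assumptions, not HRU syntax): applying δ(k1, D) = (k2, Y, R) at the end of the tape yields exactly the matrix in the figure.

```python
BLANK = "b"

def rightmost_move(acm, si, state, symbol, next_state, write):
    """Apply the crightmost command: rewrite the last cell, create a new
    cell subject, and move the end marker and the state onto it."""
    cell = acm[(si, si)]
    if not {"end", state, symbol} <= cell:        # the command's conditions
        return False
    new = "s" + str(int(si[1:]) + 1)              # create subject s_{i+1}
    cell -= {"end", state, symbol}
    cell.add(write)                               # enter Y into a[si, si]
    acm[(si, new)] = {"own"}                      # si owns the new cell
    acm[(new, new)] = {BLANK, "end", next_state}  # blank, end, k2 on s_{i+1}
    return True

# Configuration from the previous figure: tape A B X D, state k1 on s4.
acm = {("s1", "s1"): {"A"}, ("s1", "s2"): {"own"},
       ("s2", "s2"): {"B"}, ("s2", "s3"): {"own"},
       ("s3", "s3"): {"X"}, ("s3", "s4"): {"own"},
       ("s4", "s4"): {"D", "k1", "end"}}
rightmost_move(acm, "s4", "k1", "D", "k2", "Y")
# acm[("s4", "s4")] == {"Y"}, acm[("s4", "s5")] == {"own"},
# acm[("s5", "s5")] == {"b", "end", "k2"}   (matches the figure)
```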
Rest of Proof • The protection system exactly simulates the TM • Exactly one end right in the ACM • Only one right corresponds to a state • Thus at most one command is applicable in each configuration of the TM • If the TM enters state qf, then the right qf has leaked • If the safety question were decidable, represent the TM as above and determine whether qf leaks • qf leaks ⇔ the halting state appears in the matrix ⇔ the halting state is reached • Conclusion: the safety question is undecidable
Security Policy • Defines what it means for a system to be secure • Formally: partitions the states of a system into • a set of secure (authorized) states • a set of non-secure (unauthorized) states • A secure system is one that • starts in an authorized state • cannot enter an unauthorized state
Secure System - Example • Figure: a finite state machine with states A, B, C, and D, partitioned into authorized and unauthorized states • Is this finite state machine secure? • If A is the start state? • If B is the start state? • If C is the start state? • How can this be made secure if not? • Suppose A, B, and C are the authorized states?
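A minimal sketch of the check these questions are asking for: a system is secure exactly when no unauthorized state is reachable from the start state. The transition relation below is a made-up example, not the one drawn on the slide.

```python
def secure(start, transitions, unauthorized):
    """Return True iff no unauthorized state is reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if state in unauthorized:
            return False
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True

# Hypothetical transition graph over the states A-D.
transitions = {"A": ["B"], "B": ["C"], "C": ["C"], "D": ["D"]}
print(secure("A", transitions, unauthorized={"D"}))   # True: D never reached
print(secure("D", transitions, unauthorized={"D"}))   # False: starts unauthorized
```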
Additional Definitions: • Security breach: the system enters an unauthorized state • Let X be a set of entities and I be information • I has confidentiality with respect to X if no member of X can obtain information about I • I has integrity with respect to X if all members of X trust I • Trust in I, its conveyance, and its protection (data integrity) • I may be origin information or an identity (authentication) • I is a resource – its integrity implies it functions as it should (assurance) • I has availability with respect to X if all members of X can access I • Time limits (quality of service)
Confidentiality Policy • Also known as information flow • Transfer of rights • Transfer of information without transfer of rights • Temporal context • Model often depends on trust • Parts of system where information could flow • Trusted entity must participate to enable flow • Highly developed in Military/Government
Integrity Policy • Defines how information can be altered • Entities allowed to alter data • Conditions under which data can be altered • Limits to changes of data • Examples: • A purchase over $1,000 requires a signature • A check over $10,000 must be approved by one person and cashed by another • Separation of duties: helps prevent fraud • Highly developed in the commercial world
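As a small illustration of the check rule quoted above (the function and field names are invented), a mechanism enforcing it might look like:

```python
def may_cash(amount, approved_by, cashed_by):
    """Checks over $10,000 must be approved by one person and cashed by
    another (separation of duties); smaller checks only need an approver."""
    if approved_by is None:
        return False
    if amount > 10_000 and approved_by == cashed_by:
        return False            # same person on both sides: reject
    return True

print(may_cash(25_000, approved_by="alice", cashed_by="alice"))  # False
print(may_cash(25_000, approved_by="alice", cashed_by="bob"))    # True
```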
Transaction-oriented Integrity • Begin in consistent state • “Consistent” defined by specification • Perform series of actions (transaction) • Actions cannot be interrupted • If actions complete, system in consistent state • If actions do not complete, system reverts to beginning (consistent) state
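A sketch of this transaction discipline in Python (the consistency predicate, accounts never negative, is an assumed example): the actions run on a copy of the state, and the result is kept only if it is consistent; otherwise the original consistent state is retained.

```python
import copy

def consistent(accounts):
    """Consistency as defined by this (hypothetical) specification:
    no account balance may be negative."""
    return all(balance >= 0 for balance in accounts.values())

def run_transaction(state, actions):
    """Run all actions on a working copy; commit only if the copy is
    still consistent, otherwise revert to the original state."""
    working = copy.deepcopy(state)
    try:
        for action in actions:
            action(working)
    except Exception:
        return state                      # an action failed: revert
    return working if consistent(working) else state

accounts = {"alice": 100, "bob": 50}
transfer = [lambda a: a.update(alice=a["alice"] - 200),
            lambda a: a.update(bob=a["bob"] + 200)]
print(run_transaction(accounts, transfer))   # reverted: {'alice': 100, 'bob': 50}
```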
Trust • Theories and mechanisms rest on some trust assumptions • Administrator installs patch • Trusts patch came from vendor, not tampered with in transit • Trusts vendor tested patch thoroughly • Trusts vendor’s test environment corresponds to local environment • Trusts patch is installed correctly
Trust in Formal Verification • Formal verification provides a formal mathematical proof that, given input i, program P produces output o as specified • Suppose a security-related program S is formally verified to work with operating system O • What are the assumptions?
Trust in Formal Methods • Proof has no errors • Bugs in automated theorem provers • Preconditions hold in environment in which S is to be used • S transformed into executable S’ whose actions follow source code • Compiler bugs, linker/loader/library problems • Hardware executes S’ as intended • Hardware bugs
Security Mechanism • Policy describes what is allowed • Mechanism • Is an entity/procedure that enforces (part of) policy • Example Policy: Students should not copy homework • Mechanism: Disallow access to files owned by other users • Does mechanism enforce policy?
Common Mechanisms: Access Control • Discretionary Access Control (DAC) • Owner determines access rights • Typically identity-based access control: the owner specifies which other users have access • Mandatory Access Control (MAC) • Rules specify the granting of access • Also called rule-based access control • Originator Controlled Access Control (ORCON) • Originator controls access • Originator need not be the owner! • Role-Based Access Control (RBAC) • Access is governed by the role the user assumes
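To contrast two of these mechanisms, here is a hypothetical sketch: in the DAC check the owner's explicit grants decide access, while in the RBAC check the user's role does. All users, objects, roles, and permissions are invented for illustration.

```python
# Discretionary access control: the owner of homework.txt decides who may read it.
dac_acl = {"homework.txt": {"owner": "alice", "readers": {"alice"}}}

def dac_can_read(user, obj):
    entry = dac_acl[obj]
    return user == entry["owner"] or user in entry["readers"]

# Role-based access control: permissions attach to roles, users assume roles.
role_of = {"alice": "student", "carol": "instructor"}
role_permissions = {"instructor": {"read", "grade"}, "student": {"read"}}

def rbac_can(user, action):
    return action in role_permissions.get(role_of.get(user), set())

print(dac_can_read("bob", "homework.txt"))   # False: alice granted bob nothing
print(rbac_can("carol", "grade"))            # True: the instructor role permits it
```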