
Lecture 3



  1. Lecture 3 Security Model

  2. Why Security Models? • A security model is a formal description of a security policy • Models are used in high assurance security evaluations (smart cards are currently a fruitful area of application) • Models are important historic milestones in computer security (e.g. Bell-LaPadula) • The models presented today are not recipes for security but can be a starting point when you have to define a model yourself.

  3. Contents • The Bell-LaPadula (BLP) model • Case Study: Multics interpretation of BLP • Other models: • Changing access rights: Harrison-Ruzzo-Ullman, Chinese Wall • Integrity: Biba, Clark-Wilson • Perfection: information flow and non-interference models

  4. State Machine Models (Automata) • Abstract models that record relevant features, like the security of a computer system, in their state. • States change at discrete points in time, e.g. triggered by a clock or an input event. • State machine models have numerous applications in computer science: processor design, programming languages, or security. Examples: • Switch: two states, ‘on’ and ‘off’ • Ticket machine: inputs: ticket requests, coins; state: ticket requested, money to be paid; output: ticket, change • Microprocessors: state: register contents, inputs: machine instructions

  5. Basic Security Theorems • To design a secure system with the help of state machine models, • define state set so that it captures ‘security’ • check that all state transitions starting in a ‘secure’ state yield a ‘secure’ state • check that initial state of the system is ‘secure’ • Security is then preserved by all state transitions. The system will always be ‘secure’. • This Basic Security Theorem has been derived without any definition of ‘security’!
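
To make the inductive argument concrete, here is a minimal sketch (all names hypothetical, not from the lecture): for a small, finite state machine we can check exhaustively that the initial state is 'secure' and that every transition out of a 'secure' state lands in a 'secure' state, so every reachable state is 'secure'.

    # Hypothetical sketch: check the Basic Security Theorem argument by
    # exhaustive search over a small, finite state machine.

    def system_is_always_secure(initial_state, transitions, is_secure):
        """transitions: dict mapping a state to its successor states.
        is_secure: predicate defining 'security' (left abstract, as in the theorem)."""
        if not is_secure(initial_state):
            return False
        seen, frontier = {initial_state}, [initial_state]
        while frontier:
            state = frontier.pop()
            for nxt in transitions.get(state, []):
                if not is_secure(nxt):      # a transition left the secure states
                    return False
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return True                         # every reachable state is secure

    # Toy example: a switch that must never reach the state 'broken'.
    transitions = {"off": ["on"], "on": ["off"]}
    print(system_is_always_secure("off", transitions, lambda s: s != "broken"))  # True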

  6. Bell-LaPadula Model (BLP) • Subjects and objects are labeled with security levels that form a partial ordering • The policy: no information flow from ‘high’ security levels down to ‘low’ security levels (confidentiality) • BLP only considers information flows that occur when a subject observes or alters an object • BLP is a state machine model • Access permissions are defined through an access control matrix and through security levels

  7. Constructing the state set 1. All current access operations: • an access operation is described by a triple (s, o, a), s ∈ S, o ∈ O, a ∈ A. E.g.: (Alice, fun.com, read) • The set of all current access operations is an element of P(S × O × A). E.g.: {(Alice, fun.com, read), (Bob, fun.com, write), …}

  8. Constructing the state set 2. Current assignment of security levels: • maximal security level: fS: S → L (L … labels) • current security level: fC: S → L • classification: fO: O → L • The security level of a user is the user’s clearance • The current security level allows subjects to be downgraded temporarily (more later). • F ⊆ L^S × L^S × L^O is the set of security level assignments; f = (fS, fC, fO) denotes an element of F

  9. Constructing the state set 3. Current permissions: • defined by the access control matrix M • M is the set of access control matrices • The state set of BLP: V = B × M × F • B is our shorthand for P(S × O × A) • b denotes a set of current access operations • a state is denoted by (b, M, f)

  10. BLP Policies • Discretionary Security Property (ds-property): access must be permitted by the access control matrix: if (s, o, a) is a current access, then a ∈ Mso • Simple Security (ss-)Property (no read-up): fS(s) ≥ fO(o) if access is in observe mode • The ss-property is a familiar policy for controlling access to classified paper documents

  11. On subjects • In the ss-property, subjects act as observers • In a computer system, subjects are processes and have no memory of their own • Subjects have access to memory objects • Subjects can act as channels by reading one memory object and transferring information to another memory object • In this way, data may be declassified improperly

  12. Star Property • ⋆-Property (star property) (no write-down): fC(s) ≤ fO(o) if access is in alter mode; also, if subject s has access to an object o in alter mode, then fO(o') ≤ fO(o) for all objects o' accessed by s in observe mode. • The very first version of BLP did not have the ⋆-property. • The ss-property and ⋆-property are called the mandatory BLP policies.
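
As an illustration of how the three properties fit together, here is a small sketch of a BLP state (b, M, f). Security levels are plain integers here for simplicity, although BLP only requires a partial ordering, and the access modes are simplified; all identifiers are invented for the example.

    # Illustrative BLP state (b, M, f); levels are integers for simplicity.
    # Simplified access modes: in full BLP, 'write' is observe *and* alter.
    OBSERVE, ALTER = {"read"}, {"write", "append"}

    def ds_property(b, M):
        # every current access must be permitted by the access control matrix
        return all(a in M.get((s, o), set()) for (s, o, a) in b)

    def ss_property(b, f_S, f_O):
        # no read-up: observe access only if fS(s) >= fO(o)
        return all(f_S[s] >= f_O[o] for (s, o, a) in b if a in OBSERVE)

    def star_property(b, f_C, f_O):
        # no write-down: alter access only if fC(s) <= fO(o), and anything a
        # subject alters must dominate everything it currently observes
        for (s, o, a) in b:
            if a in ALTER:
                if f_C[s] > f_O[o]:
                    return False
                if any(f_O[o2] > f_O[o] for (s2, o2, a2) in b
                       if s2 == s and a2 in OBSERVE):
                    return False
        return True

    def state_is_secure(b, M, f_S, f_C, f_O):
        return (ds_property(b, M) and ss_property(b, f_S, f_O)
                and star_property(b, f_C, f_O))

    # Alice (clearance 2, current level 1) reads a low object, writes a high one.
    b = {("Alice", "memo.txt", "read"), ("Alice", "log.txt", "write")}
    M = {("Alice", "memo.txt"): {"read"}, ("Alice", "log.txt"): {"write"}}
    print(state_is_secure(b, M, f_S={"Alice": 2}, f_C={"Alice": 1},
                          f_O={"memo.txt": 1, "log.txt": 2}))   # True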

  13. Blocking channels via subjects [Figure: a subject whose current security level has been downgraded; observe access to high objects is blocked, while alter access at the current level and observe access to lower levels remain.]

  14. No Write-Down • The ⋆-property prevents a high level subject from sending legitimate messages to low level subjects • Two ways to escape from this restriction: • Temporarily downgrade high level subject; hence the current security level fC; BLP subjects have no memory of their own! • Exempt trusted subjects from the ⋆-property. • We redefine the ⋆-property and demand it only for subjects that are not trusted.

  15. Trusted Subjects Trusted subjects may violate security policies! Distinguish between trusted subjects (exempted from the ⋆-property) and trustworthy subjects (which can actually be relied upon not to abuse that exemption).

  16. Basic Security Theorem • A state is secure if all current access tuples (s, o, a) are permitted by the ss-, ⋆-, and ds-properties. • A state transition is secure if it goes from a secure state to a secure state. • This Basic Security Theorem has nothing to do with the BLP security policies, only with state machine modeling. Basic Security Theorem: If the initial state of a system is secure and if all state transitions are secure, then the system will always be secure.

  17. Tranquility • McLean: construct a system with an operation downgrade that: • downgrades all subjects and objects to system low • enters all access rights in all positions of the access control matrix. • The resulting state is secure according to BLP. • Should such a system be regarded as secure? • McLean: no, everybody is allowed to do everything • Bell: yes, if downgrade was part of the system specification • BLP has no policies for changing access control data • BLP assumes tranquility: access control data do not change

  18. Covert Channels • Communications channels that allow transfer of information in a manner that violates the system’s security policy. • Storage channels: e.g. through operating system messages, file names, etc. • Timing channels: e.g. through monitoring system performance • Orange Book: 100 bits per second is ‘high’ bandwidth for storage channels, no upper limit on timing channels. • The bandwidth of some covert channels can be reduced by reducing the performance of the system. • Covert channels are not detected by BLP modeling.

  19. Applying BLP • Multics was designed to be a secure, reliable multi-user O/S. Multics became too cumbersome for some project members, who then created something much simpler, viz. Unix. • The history of the two systems is ample evidence for the relation between commercial success and the balance between usability and security. • We will sketch how the Bell-LaPadula model can be used in the design of a secure O/S. • The inductive definition of security in BLP makes it relatively easy to build a secure system. To show that Multics is secure, one has to find a description of the O/S that is consistent with BLP and verify that all state transitions are secure.

  20. Multics Interpretation of BLP [Figure: a segment descriptor word (SDW) with segment_id, a pointer to the segment, and flags r: on, e: off, w: on.] • Subjects in Multics are processes. Each subject has a descriptor segment containing information about the process • For each object the subject currently has access to, there is a segment descriptor word (SDW) in the subject’s descriptor segment. The SDW contains the name of the object, a pointer to the object, and flags for read, execute, and write access. • The security levels of subjects are kept in a process level table and a current-level table. The active segment table records all active processes. Only active processes have access to an object.

  21. Compatibility • Objects are memory segments, I/O devices, ... • Objects are organized hierarchically in a directory tree. Directories are again segments. Information about an object, like its security level or its access control list (ACL), is kept in the object’s parent directory. Changing an object’s access control parameters and creating or deleting an object requires write or append access rights to the parent directory. • To access an object, a process has to traverse the directory tree from the root directory to the target object. If any directory in this path is not accessible to the process, the target object is not accessible either. • Compatibility: the security level of an object must always dominate the security level of its parent directory.

  22. BLP State in Multics Components of the BLP state in Multics: • Current access b: stored in the SDWs in the descriptor segments of the active processes; the active processes are found in the active segment table. The descriptor segment base register (DSBR) points to the descriptor segment of the current process. • Level function f: security levels of the subjects are stored in the process level table and the current-level table; the security level of an object is stored in its parent directory. • Access control matrix M: represented by the ACLs; for each object, the ACL is stored in its parent directory; each ACL entry specifies a process and the access rights the process has on that object.

  23. [Figure: the BLP state in Multics. The DSBR points to the descriptor segment of the current process (the subject); an SDW in that segment (segment-id, pointer, flags r: on, e: off, w: off) refers to the object; the subject's current level LC is held in the current-level table, and the object's level LO is stored in its parent directory, so the kernel can check whether LC dominates LO.]

  24. MAC in Multics • Multics access attributes for data segments: • read r • execute e, r • read and write w • write a • The ⋆-property for Multics: for any SDW in the descriptor segment of an active process, the current level of the process • dominates the level of the segment if the read or execute flags are on and the write flag is off, • is dominated by the level of the segment if the read flag is off and the write flag is on, • is equal to the level of the segment if the read flag is on and the write flag is on.

  25. Kernel Primitives • If the state transitions in an abstract model of the Multics kernel preserve the BLP security policies, then the BLP Basic Security Theorem proves the ‘security’ of Multics. • Example: the get-read primitive takes as its parameters a process-id and a segment-id. The O/S has to check whether • the ACL of segment-id, stored in the segment's parent directory, lists process-id with read permission, • the security level of process-id dominates the security level of segment-id, • process-id is a trusted subject, or the current security level of process-id dominates the security level of segment-id. • If all three conditions are met, access is permitted and an SDW in the descriptor segment of process-id is added/updated (see the sketch below).
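
A hypothetical sketch of the three get-read checks; the dictionaries below are invented stand-ins for the ACLs and level tables, not actual Multics data structures.

    # Sketch of the get-read decision, assuming integer security levels.
    def get_read(process_id, segment_id, acl, f_S, f_C, f_O, trusted):
        """Return True if read access may be granted (an SDW would then be
        added/updated in the process's descriptor segment)."""
        acl_ok = "read" in acl.get(segment_id, {}).get(process_id, set())
        level_ok = f_S[process_id] >= f_O[segment_id]          # clearance dominates
        current_ok = (process_id in trusted                     # trusted subject, or
                      or f_C[process_id] >= f_O[segment_id])    # current level dominates
        return acl_ok and level_ok and current_ok

    acl = {"seg1": {"p1": {"read"}}}
    print(get_read("p1", "seg1", acl, f_S={"p1": 2}, f_C={"p1": 1},
                   f_O={"seg1": 1}, trusted=set()))   # True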

  26. More Kernel Primitives • release-read: release an object; the read flag in the corresponding SDW is turned off; if thereafter no flag is on, the SDW is removed from the descriptor segment. • give-read: grant read access to another process. • rescind-read: withdraw a read permission given to another process. • create-object: create an object; the O/S has to check that write access on the object's directory segment is permitted and that the security level of the segment dominates the security level of the process. • change-subject-current-security-level: the O/S has to check that no security violations are created by the change; neither this kernel primitive nor the primitive change-object-security-level was intended for implementation (tranquility).

  27. Aspects of BLP • Descriptive capability of its state machine model: can be used for other properties, e.g. for integrity • Its access control structures, access control matrix and security levels: can be replaced by other structures, e.g. by S × S × O to capture ‘delegation’ • The actual security policies, the ss-, ⋆-, and ds-properties: can be replaced by other policies (see Biba model) • A specific application of BLP, e.g. its Multics interpretation

  28. Limitations of BLP • Restricted to confidentiality • No policies for changing access rights; a general and complete downgrade is secure; BLP is intended for systems with static security levels • BLP contains covert channels: a low subject can detect the existence of high objects when it is denied access • Sometimes, it is not sufficient to hide only the contents of objects. Also their existence may have to be hidden.

  29. Harrison-Ruzzo-Ullman Model • BLP has no policies for changing access rights or for the creation and deletion of subjects and objects. • The Harrison-Ruzzo-Ullman (HRU) model defines authorization systems that address these issues. • The components of the HRU model: • set of subjects S • set of objects O • set of access rights R • access matrix M = (Mso), s ∈ S, o ∈ O: entry Mso is a subset of R defining the rights subject s has on object o

  30. Primitive Operations in HRU • Six primitive operations for manipulating subjects, objects, and the access matrix: enter r into Mso; delete r from Mso; create subject s; delete subject s; create object o; delete object o • Commands in the HRU model (examples):

    command create_file(s, f)
        create object f
        enter o into Ms,f
        enter r into Ms,f
        enter w into Ms,f
    end

    command grant_read(s, p, f)
        if o in Ms,f
        then enter r into Mp,f
    end
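
The same two commands can be played through in code. This is a minimal sketch with Python dicts standing in for the access matrix, reading the right ‘o’ as ownership, as is conventional for this example.

    # Sketch of HRU primitives and the two example commands.
    M = {}      # access matrix: (subject, object) -> set of rights
    O = set()   # current objects

    def enter(r, s, o):  M.setdefault((s, o), set()).add(r)
    def delete(r, s, o): M.get((s, o), set()).discard(r)
    def create_object(o): O.add(o)

    def create_file(s, f):              # command create_file(s, f)
        create_object(f)
        for r in ("o", "r", "w"):       # owner, read, write
            enter(r, s, f)

    def grant_read(s, p, f):            # command grant_read(s, p, f)
        if "o" in M.get((s, f), set()): # only the owner may grant
            enter("r", p, f)

    create_file("alice", "report")
    grant_read("alice", "bob", "report")
    print(M[("bob", "report")])         # {'r'}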

  31. “Leaking” of rights in HRU • The access matrix describes the state of the system; commands effect changes in the access matrix. • HRU can model policies for allocating access rights. To verify compliance with a given policy, you have to check that no undesirable access rights can be granted. • An access matrix M is said to leak the right r if there exists a command c that adds r into a position of the access matrix that previously did not contain r. • M is safe with respect to the right r if no sequence of commands can transform M into a state that leaks r. • Do not expect the meaning of ‘leak’ and ‘safe’ to match your own intuition.

  32. Safety Properties of HRU • The safety problem cannot be tackled in its full generality. For restricted models, the chances of success are better. Theorem. Given an access matrix M and a right r, verifying the safety of M with respect to r is undecidable. • Mono-operational commands contain a single operation: Theorem. Given a mono-operational authorization system, an access matrix M, and a right r, verifying the safety of M with respect to r is decidable. • With two operations per command, the safety problem is undecidable. Limiting the size of the authorization system is another way of making the safety problem tractable. Theorem. The safety problem for arbitrary authorization systems is decidable if the number of subjects is finite.

  33. HRU – Summary • Do not memorize the details of HRU • Do not memorize the individual undecidability theorems for the HRU variants • Remember the important message: the more expressive the security model, the more difficult it is to verify security • You don’t have to know the basics of computational complexity theory for this course, but it helps to appreciate the challenges in formally verifying security

  34. Chinese Wall Model • In financial institutions, analysts deal with a number of clients and have to avoid conflicts of interest. • http://www.taxopedia.com/terms/c/chinesewall.asp • The model has the following components: • subjects: analysts • objects: data items for a single client • company datasets: y: O → C gives for each object its company dataset • conflict of interest classes: companies that are competitors; x: O → P(C) gives for each object o the companies with a conflict of interest on o • ‘labels’: company dataset + conflict of interest class • sanitized information: no access restrictions

  35. Chinese Wall Model - Policies • Simple Security Property: access is only granted if the object requested • is in the same company dataset as an object already accessed by that subject, or • does not belong to any of the conflict of interest classes of objects already accessed by that subject • Formally: • N = (Nso), s ∈ S, o ∈ O: a Boolean matrix, Nso = true if s has accessed o • ss-property: subject s gets access to object o only if for all objects o' with Nso' = true, y(o) ∉ x(o') or y(o) = y(o')

  36. Chinese Wall Model: ⋆-Property • Indirect information flow: A and B are competitors having accounts with the same Bank. Analyst_A, dealing with A and the Bank, updates the Bank portfolio with sensitive information about A. Analyst_B, dealing with B and the Bank, now has access to information about a competitor. • ⋆-Property: a subject s is permitted write access to an object only if s has no read access to any object o', which is in a different company dataset and is unsanitized. • Formally: subject s gets write access to object o only if s has no read access to an object o' with y(o) ≠ y(o') and x(o') ≠ ∅ • Access rights of subjects change dynamically with every access operation (see the sketch below).
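
A small sketch of both Chinese Wall checks, following the formalization above (x(o') = ∅ marks sanitized objects; all names are invented for the example).

    # y maps objects to their company dataset; x maps objects to their
    # conflict-of-interest class (the set of competing datasets).
    def cw_read_ok(s, o, accessed, y, x):
        # ss-property: same dataset as earlier accesses, or no conflict with them
        return all(y[o] == y[o2] or y[o] not in x[o2] for o2 in accessed[s])

    def cw_write_ok(s, o, accessed, y, x):
        # *-property: no read access to an unsanitized object in another dataset
        return all(y[o] == y[o2] or not x[o2] for o2 in accessed[s])

    y = {"a1": "A", "b1": "B", "bank1": "Bank"}
    x = {"a1": {"A", "B"}, "b1": {"A", "B"}, "bank1": set()}  # bank data sanitized
    accessed = {"analyst": {"a1", "bank1"}}                    # history of reads
    print(cw_read_ok("analyst", "b1", accessed, y, x))    # False: conflict with a1
    print(cw_write_ok("analyst", "bank1", accessed, y, x))  # False: reads unsanitized a1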

  37. Biba Model • State machine model similar to BLP for integrity policies that regulate modification of objects • Integrity levels (such as ‘clean’ or ‘dirty’) are assigned to subjects and objects • The Biba model has policies for an ‘invoke’ operation whereby one subject can access (invoke) another subject • The policies for static integrity levels are the dual of the mandatory BLP policies (tranquility)

  38. Biba with static integrity levels • Simple Integrity Property (no write-up): if subject s can modify (alter) object o, then fS(s) ≥ fO(o) • Integrity ⋆-Property: if subject s can read (observe) object o, then s can have write access to some other object o' only if fO(o) ≥ fO(o') • Invoke Property: a ‘dirty’ subject s1 must not touch a ‘clean’ object indirectly by invoking s2: subject s1 can invoke subject s2 only if fS(s1) ≥ fS(s2)

  39. Biba: dynamic integrity levels • Low watermark policies automatically adjust levels (dynamic behaviour, as in the Chinese Wall model): • Subject Low Watermark Policy: subject s can read (observe) an object o at any integrity level. The new integrity level of s is g.l.b.(fS(s), fO(o)) • Object Low Watermark Policy: subject s can modify (alter) an object o at any integrity level. The new integrity level of o is g.l.b.(fS(s), fO(o)) • Note: g.l.b. = greatest lower bound • http://winnie.kuis.kyoto-u.ac.jp/foldoc/foldoc.cgi?greatest+lower+bound
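
A sketch of the static Biba checks and the subject low watermark. Integrity levels are integers here (higher = ‘cleaner’), so the greatest lower bound is simply min; all names are illustrative.

    def biba_modify_ok(f_S, f_O, s, o):
        return f_S[s] >= f_O[o]            # simple integrity: no write-up

    def biba_invoke_ok(f_S, s1, s2):
        return f_S[s1] >= f_S[s2]          # invoke property

    def subject_low_watermark_read(f_S, f_O, s, o):
        # reading is always allowed, but drags the subject's integrity level
        # down to the greatest lower bound (min for integer levels)
        f_S[s] = min(f_S[s], f_O[o])

    f_S, f_O = {"proc": 2}, {"dirty.dat": 1, "clean.dat": 2}
    subject_low_watermark_read(f_S, f_O, "proc", "dirty.dat")
    print(biba_modify_ok(f_S, f_O, "proc", "clean.dat"))   # False: proc is now 'dirty'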

  40. Biba policies for protection rings • Ring Property: a ‘dirty’ subject s1 may invoke a ‘clean’ tool s2 to touch a ‘clean’ object: subject s1 can read objects at all integrity levels, modify objects o with fS(s1) ≥ fO(o), and invoke a subject s2 only if fS(s1) ≤ fS(s2). • The ring property is the opposite of the invoke property!

  41. Clark-Wilson Model • Addresses security requirements of commercial applications. ‘Military’ and ‘commercial’ are shorthand for different ways of using computers. • Emphasis on integrity • internal consistency: properties of the internal state of a system • external consistency: relation of the internal state of a system to the outside world. • Mechanisms for maintaining integrity: well-formed transactions & separation of duties

  42. Clark-Wilson Model: Integrity • Integrity: being authorized to apply a program to a data item that may be accessed through this program. • The CW model considers the following points: • Subjects have to be identified and authenticated; • Objects can be manipulated only by a restricted set of programs; • Subjects can execute only a restricted set of programs; • A proper audit log has to be maintained; and • The system has to be certified to work properly.

  43. Clark-Wilson: Access Control • Subjects and objects are ‘labeled’ with programs. • Programs serve as an intermediate layer between subjects and objects. • Access control: • define access operations (transformation procedures) that can be performed on each data item (data types). • define the access operations that can be performed by subjects (roles). • Note the difference between a general purpose operating system (BLP) and an application oriented IT system (Clark-Wilson).

  44. CW: Certification Rules • Security properties are partly defined through five certification rules, suggesting the checks that should be conducted so that the security policy is consistent with the application requirements. • IVPs (initial verification procedures) must ensure that all CDIs (constrained data items) are in a valid state when the IVP is run • TPs (transformation procedures) must be certified to be valid, i.e. valid CDIs must always be transformed into valid CDIs. Each TP is certified to access a specific set of CDIs • The access rules must satisfy any separation of duties requirements • All TPs must write to an append-only log • Any TP that takes a UDI (unconstrained data item) as input must either convert the UDI into a CDI or reject the UDI and perform no transformation at all

  45. CW: Enforcement Rules • Describe the security mechanisms within the computer system that should enforce the security policy. These rules are similar to discretionary access control in BLP. • The system must maintain and protect the list of entries {(TPi:CDIa,CDIb,...)} giving the CDIs the TP is certified to access • The system must maintain and protect the list of entries {(UserID,TPi:CDIa,CDIb,...)} specifying the TPs users can execute • The system must authenticate each user requesting to execute a TP • Only a subject that may certify an access rule for a TP may modify the respective entry in the list. This subject must not have execute rights on that TP • Clark-Wilson is less formal than BLP but much more than an access control model.
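
A minimal sketch of how the first three enforcement rules compose at run time; the certified relations are plain Python sets, and all TP, CDI, and user names are invented.

    # E1: which CDIs each TP is certified to access
    certified = {("TP_payroll", "CDI_accounts"), ("TP_payroll", "CDI_ledger")}
    # E2: which TPs each user may execute
    allowed = {("alice", "TP_payroll")}
    # E3: users the system has authenticated
    authenticated = {"alice"}

    def may_execute(user, tp, cdis):
        return (user in authenticated
                and (user, tp) in allowed
                and all((tp, cdi) in certified for cdi in cdis))

    print(may_execute("alice", "TP_payroll", ["CDI_accounts"]))  # True
    print(may_execute("bob",   "TP_payroll", ["CDI_accounts"]))  # False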

  46. Information Flow Models • Similar framework as BLP: objects are labeled with security classes (forming a lattice), information may flow upwards only. • A system is secure if there is no illegal information flow. • Information flow is described in terms of conditional entropy (called equivocation in information theory) • Information flows from x to y if we learn something about x by observing y: • explicit information flow: y := x • implicit information flow: IF x=0 THEN y:=1 • covert channels • Proving security is undecidable.
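
The implicit flow from the slide, written out: x is never copied into y, yet observing y reveals whether x was 0.

    def implicit_flow(x):
        y = 0
        if x == 0:       # implicit information flow: IF x=0 THEN y:=1
            y = 1
        return y

    print(implicit_flow(0), implicit_flow(7))   # 1 0  -- y discloses one bit of x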

  47. Non-interference models • A group of users, using a certain set of commands, is non-interfering with another group of users if what the first group does with those commands has no effect on what the second group of users can see. • Take a state machine where low users only see outputs relating to their own inputs. High users are non-interfering with low users if the low users see the same no matter whether the high users had been providing inputs or not. • Non-interference is an active research area in formal methods.
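
A toy sketch of this idea for a deterministic machine: purge the high inputs from an input sequence and compare the low users' views of the two runs. The machine encoding below is invented for illustration.

    def low_view(inputs, machine):
        """Run the machine; collect the outputs low users see (replies to
        their own inputs only, as described above)."""
        state, outputs = machine["init"], []
        for (level, symbol) in inputs:
            state, out = machine["step"](state, level, symbol)
            if level == "low":
                outputs.append(out)
        return outputs

    def purge_high(inputs):
        return [(lvl, sym) for (lvl, sym) in inputs if lvl != "high"]

    def non_interfering(inputs, machine):
        return low_view(inputs, machine) == low_view(purge_high(inputs), machine)

    # A machine whose low output echoes a counter of *all* inputs:
    # high inputs interfere with what low users see.
    leaky = {"init": 0, "step": lambda st, lvl, sym: (st + 1, st + 1)}
    trace = [("high", "h1"), ("low", "l1")]
    print(non_interfering(trace, leaky))   # False: low sees 2 instead of 1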

  48. The 3rd Design Principle • If you design complex systems that can only be described by complex models, finding proofs of security becomes difficult. In the worst case (undecidable), no universal algorithm exists that verifies security in all cases. • If you want verifiable security properties, you are better off with a security model of limited complexity. Such a model may not describe all desirable security properties, but you may gain efficient methods for verifying ‘security’. In turn, you are advised to design simple systems that can be adequately described in the simple model. • The more expressive a security model is, both with respect to the security properties and the systems it can describe, the more difficult it is usually to verify security properties.

  49. Summary • Models • The BLP model • Multics: a case study of the BLP model • The HRU model • The Chinese Wall Model • The Biba Model • The CW model
