POLIPO: Policies & OntoLogies for Interoperability, Portability, and autOnomy Daniel Trivellato
Outline • Problem Definition • Approach • POLIPO • Language requirements • Policy language syntax • Reputation system • Credential Chain Discovery Algorithm
Example Scenario • A NATO surveillance mission involving GBR, CANADA, USA, and ITA • GBR's policy: the mission goals can be read if the requester is a Senior Officer • GBR's definition: a Senior Officer is an Officer with at least 10 years of service • ITA has no notion of “Senior Officer” and cannot interpret GBR's policy
Problem Definition (1/2) • Goal: Situational awareness in a System of Systems • independent, heterogeneous components • DISTRIBUTED AUTHORITY • MUTUAL UNDERSTANDING • dynamic (re-)configurations (join and leave) • AVAILABILITY • ACCOUNTABILITY
Problem Definition (2/2) • Security goals: • protection of sensitive data from unauthorized disclosure, using content- and context-aware security policies • secure interaction between (possibly untrusted) parties of dynamic coalitions • interoperability between heterogeneous systems and policy models, tuning local policies to ensure global security
Proposed Solutions • Access Control to specify the permissions of subjects on objects • Trust Management to establish trust between unknown parties • Ontologies to enable mutual understanding
Ontologies (1/2) • Formally represent domain knowledge • Define concepts, instances and (binary) relationships in a domain • Constraints make it possible to infer information that is not explicitly stated • Each ontology can refer to concepts defined in another ontology (reusability) [Diagram: concepts NATO:AlliedCountry, MO:Officer, PSD:JuniorOfficer, PSD:SeniorOfficer; the relationship MO:worksFor; instances NL, Jack, and John]
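To make the inference point concrete, here is a minimal sketch (not from the slides; the class and instance names follow the diagram above, and the closure code is a toy stand-in for a real reasoner):

```python
# Illustrative sketch: inferring implicit facts from subclass axioms.
# Class and instance names follow the slide's diagram; a real system
# would delegate this to an OWL reasoner.

subclass_of = {
    "PSD:SeniorOfficer": "MO:Officer",
    "PSD:JuniorOfficer": "MO:Officer",
}
asserted = {("PSD:SeniorOfficer", "John"), ("PSD:JuniorOfficer", "Jack")}

def infer(asserted, subclass_of):
    """Close the instance assertions under the subclass hierarchy."""
    facts = set(asserted)
    changed = True
    while changed:
        changed = False
        for cls, inst in list(facts):
            sup = subclass_of.get(cls)
            if sup and (sup, inst) not in facts:
                facts.add((sup, inst))
                changed = True
    return facts

# ("MO:Officer", "John") is inferred although never stated explicitly.
print(infer(asserted, subclass_of))
```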
Ontologies (2/2) • Ontologies can be used to give semantics to predicates in rules • Ontologies can also be used to align AC models • However, in a distributed system … • two entities may refer to the same object with different names • two entities may use the same name to refer to different objects
Application Domains • Semantic Web • Data protection on the web • Business Processes for Web Services • Virtual organizations • Maritime Safety and Security (MSS) • Healthcare • Business to Business (B2B)
Language Requirements • Requirement 1: INTEROPERABILITY • Requirement 2: AUTONOMY • Requirement 3: PORTABILITY
R1 - Interoperability Parties shall be able to interact with each other unambiguously Ontologies denote the semantics of concepts and relationships in the domain
R2 - Autonomy Every party shall be able to design and express its policy autonomously A party must be able to specify its policy independently from the actions and definitions of other parties
Example • Global ontology: Officer, with DISJOINT subconcepts Junior Officer and Senior Officer • Party 1 and Party 2 each define a local concept Temporary Officer as a local extension to the global ontology • Mappings from local to global concepts WHO DOES THE MAPPINGS? HOW DO WE GUARANTEE THEIR CORRECTNESS?
R3 - Portability Remote evaluation of policies shall preserve the interpretation of the policy owner • Remote policy evaluations should not grant any permission that would not be granted by a local evaluation • Use credentials to preserve interpretation of the policy owner
Language Syntax • Atoms • Atoms are used to build rules • Sets of rules make policies
Syntax: Basic Constructs • Ontology atoms: queries to the knowledge base, represented by an ontology • e.g., psd:SeniorOfficer(‘John’) psd:worksFor(‘John’,’BS’) • Credential atoms • e.g., cred(‘BS’,’psd:SeniorOfficer’,’John’, [(‘psd:validUntil’,’31/12/2009’)]) • Authorization atoms • e.g., perm(‘psd:read’, ‘John’, ‘File’) • Constraints: built-ins or user-defined predicates • e.g., X = Y + 3, aboutSurveillance(‘File’)
Syntax: Rules • Horn clauses of the form h ← b1,…,bn • h (head) is an atom • b1,…,bn (body) are literals (i.e., positive or negated atoms) • Negation is treated as negation as failure • Safety condition: each variable in h, in a negative literal, or in a built-in must also occur in a positive body literal
Credential Release Rules • The head is a credential atom • The body can contain positive credential and ontology atoms, and constraints Example: cred(‘BS’,‘psd:SeniorOfficer’,X,[]) ← psd:SeniorOfficer(X)
Authorization Rules • The head is an authorization atom • The body can contain positive credential, authorization and ontology atoms, constraints, and negative ontology atoms and constraints Example: perm(‘psd:read’,X,Y) ← aboutSurveillance(Y), cred(‘BS’,‘psd:SeniorOfficer’,X,[])
Constraint Definition Rules • The head is a user-defined predicate • The body can contain positive ontology atoms and constraints Example: aboutSurveillance(X) ← bs:aboutMission(X,‘Surveillance’), bs:sensitivityLevel(X,Y), Y<3
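To illustrate how the three rule types interact, here is a toy, hard-coded evaluation of the example rules above. It is a sketch only: the facts are invented for illustration, and a real POLIPO engine would evaluate ontology atoms against a reasoner rather than against Python dictionaries.

```python
# Toy evaluation of the three example rules (not the POLIPO engine).

# Invented knowledge base (normally answered by an ontology reasoner).
senior_officers = {"John"}
mission_of = {"File1": "Surveillance"}
sensitivity = {"File1": 2}

def about_surveillance(x):
    """Constraint definition rule:
    aboutSurveillance(X) <- bs:aboutMission(X,'Surveillance'),
                            bs:sensitivityLevel(X,Y), Y < 3"""
    return mission_of.get(x) == "Surveillance" and sensitivity.get(x, 99) < 3

def release_credential(x):
    """Credential release rule:
    cred('BS','psd:SeniorOfficer',X,[]) <- psd:SeniorOfficer(X)"""
    if x in senior_officers:
        return ("BS", "psd:SeniorOfficer", x, [])
    return None

def permit_read(x, y):
    """Authorization rule:
    perm('psd:read',X,Y) <- aboutSurveillance(Y),
                            cred('BS','psd:SeniorOfficer',X,[])"""
    return about_surveillance(y) and release_credential(x) is not None

print(permit_read("John", "File1"))   # True
print(permit_read("Jack", "File1"))   # False: no SeniorOfficer credential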
Policies • Credential Release Policy: set of credential release rules • Authorization Policy: set of authorization rules
Problems… • Local models may not match the global ontology model • Global terms might be too coarse-grained to describe a specific domain • Policies need precise definitions to guarantee security within a domain • A complete and precise vocabulary alignment is costly • Not feasible in short- and mid-term cooperation
Problems… [Diagram: the GBR and ITA officer rank hierarchies, aligned against the NATO rank codes OF-1 through OF-4]
…and Solution • Local terms to provide fine-grained definitions • Flexible mapping of • local to global terms • local to local terms • MORE AUTONOMY • INTEROPERABILITY • AVOID CONFLICTING DEFINITIONS
Ontology Alignment (1/2) [Diagram: GBR ranks (Admiral, Commodore, Captain, Lieutenant) under Officer, and ITA ranks (Generale, Colonnello, Maggiore, Capitano, Tenente) under Ufficiale; GBR's policy “goals can be read if OF-3” must be interpreted against both hierarchies]
Ontology Alignment (2/2) • Mapping local to global concepts is necessary for mutual understanding • Mapping local to local concepts is also a possibility • However, mappings can be imprecise • there are no 100% equivalent concepts • entities have different mapping capabilities • Who performs the mapping? How? How do we know if we can trust it?
TM + Reputation System • Extend ontology-based TM with a reputation system • every peer can define a mapping between two concepts • the trustworthiness (reputation) of a peer depends on the affinity of its opinions with those of the other peers • the final mapping is obtained by combining subjective opinions of peers based on their reputation
Mapping two Concepts • Expressed by similarity credentials • e.g., sim(GBR,’Captain’,’SeniorOfficer’, [(degree,0.7),(timeStamp,2009/09/09)]) • The similarity degree reflects that the two concepts are similar but not necessarily equivalent • Signed, for non-repudiation • Stored in a Similarity Credentials Repository • Exchanged through gossip protocols • Several entities can express opinions about the similarity of the same pair of concepts • contrasting opinions • which one should be considered?
Naïve approach • Combine all the opinions: take the average similarity degree as the “correct” one • But not all peers are equally trustworthy • Better: weight similarity statements according to each peer's reputation
Reputation • Reflects the accuracy of the similarity statements of a peer • Based on agreement with other peers • The agreement between two peers is proportional to the affinity of their similarity statements • Steps to compute reputation • For each pair of comparable similarity statements, compute their affinity • For each pair of peers, compute their agreement • Compute the reputation of all peers
Affinity • Measures the level of correspondence between non-contradicting statements • st is a local similarity threshold that establishes when two statements are contradictory
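The affinity formula itself appeared as an image on the original slide; a plausible reconstruction, consistent with the threshold behavior described on the next slide (an assumption, the paper's exact definition may differ): for two statements $s_1, s_2$ about the same pair of concepts, with similarity degrees $d_1$ and $d_2$,

$$\mathit{aff}(s_1, s_2) = 1 - |d_1 - d_2|$$

where $s_1$ and $s_2$ are considered contradictory, and excluded from the agreement computation, whenever $\mathit{aff}(s_1, s_2) < s_t$.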
Local Similarity Threshold • Low values of st increase the number of statements considered • High values of st lead to a more accurate identification of trustworthy peers
Agreement • Agreement values represented as a matrix • Updated when new credentials are acquired
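A sketch of how the agreement matrix might be maintained, assuming the affinity reconstruction above and mean aggregation over the comparable statements of each pair of peers (both are assumptions; the paper may combine affinities differently):

```python
# Hedged sketch: a pairwise agreement matrix computed from affinities.
import numpy as np

def affinity(d1, d2, st=0.6):
    """Affinity of two similarity degrees; None marks a contradiction."""
    a = 1.0 - abs(d1 - d2)
    return a if a >= st else None

def agreement_matrix(statements, peers, st=0.6):
    """statements: dict peer -> {(concept1, concept2): degree}."""
    n = len(peers)
    agr = np.eye(n)  # assumption: self-agreement 1, unknown pairs 0
    for i, p in enumerate(peers):
        for j in range(i + 1, n):
            shared = statements[p].keys() & statements[peers[j]].keys()
            affs = []
            for k in shared:
                a = affinity(statements[p][k], statements[peers[j]][k], st)
                if a is not None:        # skip contradictory statements
                    affs.append(a)
            if affs:
                agr[i, j] = agr[j, i] = sum(affs) / len(affs)
    return agr

# Invented data, peer names borrowed from the later example:
stmts = {
    "WS": {("Captain", "SeniorOfficer"): 0.7},
    "BS": {("Captain", "SeniorOfficer"): 0.8},
}
print(agreement_matrix(stmts, ["WS", "BS"]))   # off-diagonal: 0.9
```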
Computing Reputation • The reputation of a peer is a value in [0,1] • It is based on its agreement with the other peers, weighted by their reputation • The formula converges after t iterations • α is used to bias the computation on the initial reputation and guarantees convergence • More details in the paper…
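The slide's formula was an image; a plausible form of the iterative computation, reconstructed from the description above (the paper's exact formula may differ): with $r_i^{(0)}$ the initial reputation of peer $i$ and $agr(i,j)$ the agreement between peers $i$ and $j$,

$$r_i^{(t+1)} = \alpha\, r_i^{(0)} + (1 - \alpha)\, \frac{\sum_{j \neq i} agr(i,j)\, r_j^{(t)}}{\sum_{j \neq i} r_j^{(t)}}$$

iterated until the values stabilize; $\alpha \in (0,1)$ biases the result toward the initial reputation and ensures convergence, as stated on the slide.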
Example • for st = 0.6 • Order of navies: WS, BS, GC, GS • Initial reputation: 1, 0, 0, 0 • Final reputation values: 0.81, 0.70, 0.89, 0.14
Reputation-based Similarity • Computes similarity of attributes based on similarity statements • Weighted by the reputation of the issuer • Excluding opinions of untrustworthy peers • rt is a reputation threshold. Similarity credentials of peers with reputation lower than rt are discarded
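A hedged reconstruction of the reputation-weighted similarity of two concepts $c_1, c_2$: given similarity statements with degrees $d_p$, issued by peers $p$ with reputation $r_p$,

$$\mathit{sim}_{r_t}(c_1, c_2) = \frac{\sum_{p \,:\, r_p \ge r_t} r_p \cdot d_p}{\sum_{p \,:\, r_p \ge r_t} r_p}$$

so that, as the slide states, the opinions of peers whose reputation falls below the threshold $r_t$ are discarded, and the remaining opinions are weighted by the reputation of their issuer.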
TM + Reputation System • Similarity can be exploited in rules • Peers may accept credentials about any attribute similar to a given attribute • perm(read,X,File1) ← cred(GBR,Ally,Y), cred(Y,Z,X), similar(0.5,Z,Captain) ≥ 0.6 • i.e., access is granted if the reputation-based similarity of Z to Captain (computed with reputation threshold 0.5) is at least 0.6 • A peer can express policies using only its known vocabulary → AUTONOMY • Peers are able to interpret unknown terms by similarity → INTEROPERABILITY
Credential Chain Discovery • Credentials must be derived on request • To derive a credential c a peer needs to collect all the credentials on which c depends • Where do we find them? Who performs all the computations? • We need an algorithm to define a storage schema and a retrieval method
The RT algorithms • 3 algorithms: • Backward search: top-down • Forward search: bottom-up • Bi-directional search • Designed to answer different query types • They work only if certain requirements on where credentials are stored are satisfied
Query Types • 3 possible query types • Type 1: cred(TU/e,student,Alice)? • Type 2: cred(TU/e,student,X)? • Type 3: cred(X,Y,Alice)? • Where do we start searching?
Credential Storage • Query: Is Bart a student of an accredited university? • All credentials stored by the issuer • Ask for all accredited universities • Ask each university whether Bart is a student • All credentials stored by the subject • Ask Bart for all his credentials • Ask all issuers for entailed credentials… • Bart has 1000 credentials, 900 confidential… • Combine the two…
But… • Consider • cred(TU/e,student,X) ← cred(PD,student,X) • cred(PD,student,Bart) • Query: Is Bart a TU/e student? • Now, what happens if both credentials are stored by the PD? • We cannot answer the query, as we do not know where to start from
Well-typed Credentials • We need to regulate where credentials can be stored • Credentials and credential rules must be well-typed • Only if credentials are well-typed can all the solutions be retrieved • More details in the paper…
Backward Search Algorithm • Top-down • Credentials stored by the issuer! • Build a graph in which nodes are labeled by roles • Each node gets a “list of participants” • Advantages • Goal-directed • Decentralized
Example
cred(DSA,student,X) ← cred(DG,accredited,Y), cred(Y,student,X)
cred(DG,accredited,TU/e)
cred(DG,accredited,UT)
cred(DG,accredited,UvA)
cred(DG,educationalInstitution,TU/e)
cred(WUA,qualityInstitution,TU/e)
cred(TU/e,student,X) ← cred(PD,student,X)
cred(PD,student,Alice)
cred(PD,student,Bart)
cred(PD,student,Charlie)
cred(ABN,client,Bart)
cred(VISA,ccard,Bart)
Example Query: cred(DSA,student,Bart)? [Diagram: backward search graph. The search starts from the node DSA.student and follows the rule to DG.accredited, whose participants are TU/e, UT, and UvA; expanding TU/e.student leads to PD.student, whose participants are Alice, Bart, and Charlie, so Bart is found to be a DSA student.]
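A hedged sketch of backward (top-down) discovery over the example credentials above, assuming all credentials are stored by their issuer. It is a simplified recursive resolver, not the full RT algorithm: there is no explicit graph construction, and cycle handling is minimal.

```python
# Facts: cred(issuer, role, subject), taken from the example slide.
facts = {
    ("DG", "accredited", "TU/e"), ("DG", "accredited", "UT"),
    ("DG", "accredited", "UvA"),
    ("PD", "student", "Alice"), ("PD", "student", "Bart"),
    ("PD", "student", "Charlie"),
}

# Rules: each head (issuer, role) is defined by a conjunction of body
# atoms; "X" and "Y" play the role of the variables on the slide.
rules = {
    ("DSA", "student"): [[("DG", "accredited", "Y"), ("Y", "student", "X")]],
    ("TU/e", "student"): [[("PD", "student", "X")]],
}

def members(issuer, role, seen=frozenset()):
    """All X such that cred(issuer, role, X) is derivable."""
    if (issuer, role) in seen:           # avoid looping on cyclic rules
        return set()
    seen = seen | {(issuer, role)}
    result = {s for (i, r, s) in facts if (i, r) == (issuer, role)}
    for body in rules.get((issuer, role), []):
        if len(body) == 1:               # simple delegation to one role
            i, r, _ = body[0]
            result |= members(i, r, seen)
        else:                            # linked role: cred(A,r1,Y), cred(Y,r2,X)
            i1, r1, _ = body[0]
            _, r2, _ = body[1]
            for y in members(i1, r1, seen):
                result |= members(y, r2, seen)
    return result

print("Bart" in members("DSA", "student"))   # True
```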
Forward Search Algorithm • Bottom-up • Credentials stored by the subject! • Build a graph in which nodes are labeled by roles or principals • Each node gets a list of the roles it participates in or is a subset of • Disadvantages: • privacy issues!
Example (the same credential set as before, now stored by the subjects)
cred(DSA,student,X) ← cred(DG,accredited,Y), cred(Y,student,X)
cred(DG,accredited,TU/e)
cred(DG,accredited,UT)
cred(DG,accredited,UvA)
cred(DG,educationalInstitution,TU/e)
cred(WUA,qualityInstitution,TU/e)
cred(TU/e,student,X) ← cred(PD,student,X)
cred(PD,student,Alice)
cred(PD,student,Bart)
cred(PD,student,Charlie)
cred(ABN,client,Bart)
cred(VISA,ccard,Bart)
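A hedged counterpart sketch for forward (bottom-up) search, assuming credentials are stored by their subject: starting from a principal, compute every role it transitively participates in. This is simplified; the real RT forward algorithm also propagates along conjunctive rule bodies (such as the linked DSA rule above), which this sketch omits.

```python
# cred(issuer, role, subject), indexed by the subject that stores it.
# A role-defining credential like cred(TU/e,student,X) <- cred(PD,student,X)
# is viewed as the role PD.student being a "subject" of TU/e.student.
by_subject = {
    "Bart": [("PD", "student", "Bart"), ("ABN", "client", "Bart"),
             ("VISA", "ccard", "Bart")],
    ("PD", "student"): [("TU/e", "student", ("PD", "student"))],
}

def roles_of(principal):
    """All roles the principal is a member of, following stored creds."""
    result, frontier = set(), [principal]
    while frontier:
        node = frontier.pop()
        for issuer, role, _ in by_subject.get(node, []):
            if (issuer, role) not in result:
                result.add((issuer, role))
                frontier.append((issuer, role))
    return result

print(roles_of("Bart"))
# {('PD','student'), ('TU/e','student'), ('ABN','client'), ('VISA','ccard')}
```

Note how the traversal visits all of Bart's credentials, including the ABN and VISA ones, which is exactly the privacy issue the slide points out.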