Real-world trust policies Vinicius Almendra Daniel Schwabe Dept. of Informatics, PUC-Rio ISWC’05
Agenda • Problem Statement • What Does Trust Mean? • The Trust Model • Building Real-world Trust Policies • An Example • Future Work • Conclusions
Problem Statement • Scenario: a collection of Semantic Web data • Through exchange: P2P networks, semantic social desktops • Through web navigation: Piggy Bank-like approaches • Problem: is this information trustworthy?
What Does Trust Mean? • Using a real-world model of trust: “trust is reliance on received information” (Gerck, 1998) • To trust someone or something => to rely on it to achieve some goal • Reliance on a banking website to move money • Reliance on a car or plane while taking a trip • Reliance on statistical software • Reliance implies an action (actual or future), hence a boolean value
Reliance • Reliance is NOT • Blind • Static • Irrevocable • Reliance depends on • Reasoning • Circumstances • Beliefs • Freedom
What Does Trust Mean? • Reliance is useful because • It gives a mental frame for thinking about trustworthiness • It links trust with action while keeping them apart • Why real-world trust? • The model is built to support an easy mapping from everyday trust decisions to a computable representation
The Trust Model • To trust is to virtually rely • Trust is subjective: it depends on who trusts, the trusting agent • Object of trust: facts • Statements about reality • Facts can be merely known (asserted) or, additionally, trusted • Trust decision: happens when the trusting agent decides that an asserted fact can be trusted
The Trust Model • A trust decision must be reasonable: there must be a justification for accepting that a fact is trustworthy • Justification is based on beliefs, which are grounded on trusted facts • A trust policy is a set of rules that the trusting agent uses to deduce the trustworthiness of a fact; it is associated with a goal • Trust policies should be built incrementally
Trust policies • Answer the question: “is this fact trustworthy?” • The reasoning behind a trust decision can be expressed using classical logic • Trust policy = predicate over a fact asserting its trustworthiness • Fact = (s,p,o,c) – subject, predicate, object and context • Reasoning about trusted facts • May use the domain theory of the agent • Example: “I trust that a person A is a friend of a person B when A is my friend and B is known to be a person”
Trust Policies • If the facts below were trusted: • (‘Me’, ‘friend’, ‘John’, ‘My context’) • (‘Erick’, ‘type’, ‘Person’, ‘My context’) • This fact would be trusted: • (‘John’, ‘friend’, ‘Erick’, ‘My context’) • But not these: • (‘Mary’, ‘friend’, ‘John’, ‘Mary’s context’) • (‘John’, ‘brother’, ‘Erick’, ‘Robert’s context’)
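The worked example above can be sketched in a few lines of Python. This is an illustration only: all names are hypothetical, facts are modeled as plain (s, p, o, c) tuples, and the authors' actual implementation uses Prolog.

```python
# Minimal sketch of the trust model: facts are (subject, predicate,
# object, context) quads; a trust policy is a predicate over a fact
# plus the set of already-trusted facts. Names are illustrative.

trusted = {
    ("Me", "friend", "John", "My context"),
    ("Erick", "type", "Person", "My context"),
}

def friend_policy(fact, trusted_facts):
    """Trust (A, 'friend', B, c) in my context when A is my friend
    and B is known to be a person (both already-trusted facts)."""
    s, p, o, c = fact
    return (p == "friend"
            and c == "My context"
            and ("Me", "friend", s, "My context") in trusted_facts
            and (o, "type", "Person", "My context") in trusted_facts)

print(friend_policy(("John", "friend", "Erick", "My context"), trusted))   # True
print(friend_policy(("Mary", "friend", "John", "Mary's context"), trusted))  # False
```

Note that the policy is a pure predicate: the trust decision (adding the fact to the trusted set) is a separate step taken by the trusting agent, mirroring the trust-decision/justification split in the slides.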
Trust Policies • Trust axiom • Given a fact (s,p,o,c) and a trust policy P, the fact is trusted when it is asserted and the policy holds for it: trusted(s,p,o,c) ← asserted(s,p,o,c) ∧ P(s,p,o,c)
Trust Policies • Trust policies can be combined through aggregation (union of trusted facts) or specialization (intersection of trusted facts)
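Aggregation and specialization can be expressed as disjunction and conjunction of policy predicates. A sketch, with hypothetical policy names:

```python
# Sketch: combining trust policies. A policy is a predicate over a
# (s, p, o, c) quad; aggregation trusts the union of what each policy
# trusts (logical OR), specialization the intersection (logical AND).

def aggregate(*policies):
    return lambda fact: any(p(fact) for p in policies)

def specialize(*policies):
    return lambda fact: all(p(fact) for p in policies)

# Two toy policies (illustrative names, not from the paper):
self_trust = lambda fact: fact[3] == "my_context"
news_only  = lambda fact: fact[1] == "rdf:type" and fact[2] == "news:News"

broad  = aggregate(self_trust, news_only)   # trusts facts matching either
narrow = specialize(self_trust, news_only)  # trusts facts matching both

fact = ("item1", "rdf:type", "news:News", "other_context")
print(broad(fact), narrow(fact))  # True False
```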
An Example – Trust in News Info • Scenario: a person looking for trustworthy news-related information • We start with three policies: • Self-trust: trust everything contained in “my” context • Context info: trust everything stated about a context • Good News: trust news that comes from friends
An Example – Trust in News Info • Policies described as Prolog clauses:

trustedFact(S,P,O,C) :-
    assertedFact(S,P,O,C),
    goodNewsRelatedInfo(S,P,O,C).

goodNewsRelatedInfo(S,P,O,C) :- selfTrust(S,P,O,C).
goodNewsRelatedInfo(C,_,_,C).
goodNewsRelatedInfo(S,P,O,C) :- goodNews(S,P,O,C).

goodNews(_, rdf:type, 'news:News', C) :-
    trustedFact(C, dc:creator, Friend, _),
    trustedFact(myself, foaf:knows, Friend, my_context).
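The recursion in these clauses (trustedFact calls goodNews, which calls trustedFact) can be illustrated with a naive forward-chaining evaluator. The sketch below is Python for illustration only, with made-up data; the actual implementation uses XSB Prolog, and the context-info clause is omitted here for brevity:

```python
# Sketch: fixpoint evaluation of recursive trust policies over
# (s, p, o, c) quads. A fact becomes trusted once it is asserted and
# some policy accepts it given the facts trusted so far.

asserted = {
    ("myself", "foaf:knows", "alice", "my_context"),
    ("ctx1", "dc:creator", "alice", "my_context"),
    ("item1", "rdf:type", "news:News", "ctx1"),
}

def self_trust(fact, trusted):
    # Trust everything stated in my own context.
    return fact[3] == "my_context"

def good_news(fact, trusted):
    # Trust news items whose context was created by someone I know,
    # where both supporting facts are themselves already trusted.
    s, p, o, c = fact
    if not (p == "rdf:type" and o == "news:News"):
        return False
    creators = {t[2] for t in trusted if t[0] == c and t[1] == "dc:creator"}
    friends = {t[2] for t in trusted
               if t[0] == "myself" and t[1] == "foaf:knows"}
    return bool(creators & friends)

def evaluate(asserted, policies):
    # Iterate to a fixpoint, like bottom-up evaluation of the clauses.
    trusted = set()
    changed = True
    while changed:
        changed = False
        for fact in asserted:
            if fact not in trusted and any(p(fact, trusted) for p in policies):
                trusted.add(fact)
                changed = True
    return trusted

trusted = evaluate(asserted, [self_trust, good_news])
print(sorted(trusted))  # all three facts end up trusted
```

The two "my context" facts are trusted first via self-trust; the news item then becomes trusted on a later pass, once its justification is available.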
Implementation • A first implementation was done using named graphs • We moved to logic programs (XSB Prolog) to better represent trust policies • Next step: link these logic programs with an RDF triple store
Conclusions and Future Work • The simple approach is promising • Ongoing work • Handling negation – could be pushed to the underlying KB • Adding support for inference – to take advantage of the domain knowledge • Linking with RDF triple stores • Providing a method for building trust policies that preserves the “real-world” property • Building tools to help users specify policies • Applying the model to a realistic case study