
Trust Management

Explore a unified computational model for managing trust, considering reputation, reciprocity, and security, with examples from Amazon and eBay. Learn about decentralized trust mechanisms like DMRep and XRep for robust trust management.



Presentation Transcript


  1. Trust Management • Chen Ding, Chen Yueguo, Cheng Weiwei

  2. Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion

  3. Trust Management • “a unified approach to specifying and interpreting security policies, credentials, relationships [which] allows direct authorization of security-critical actions” – Blaze, Feigenbaum & Lacy • Trust Management is the capture, evaluation and enforcement of trusting intentions.

  4. Reputation, Trust and Reciprocity • Reputation: perception that an agent creates through past actions about its intentions and norms. • Trust: a subjective expectation an agent has about another's future behavior based on the history of their encounters. • Reciprocity: mutual exchange of deeds. • (Figure: feedback loop in a given social network: an increase in ai's reciprocating actions increases ai's reputation; a higher reputation increases aj's trust of ai; higher trust in turn encourages further reciprocating actions by ai.)

  5. A Computational Model • Defines trust as a dyadic quantity between the trustor and trustee which can be inferred from reputation data about the trustee • Two simplifications: • The embedded social networks are taken to be static • The action space is restricted to: α ∈ {cooperate, defect}

  6. Notations for the Model • Reputation: θji(c) ∈ [0,1] • Let C be the set of all contexts of interest. • θji(c) represents ai's reputation in an embedded social network of concern to aj for the context c ∈ C • History: Dji(c) = {E*} • Dji(c) represents the history of encounters that aj has had with ai within the context c. • Trust: Tji(c) = E[θji(c) | Dji(c)] • The higher the trust level for agent ai, the higher the expectation that ai will reciprocate agent aj's actions.

  7. A Computational Model (cont…) • (Figure: agents a and b interacting within a context c.) • θab: b's reputation in the eyes of a. • xab(i): the i-th transaction between a and b. • After n transactions we obtain the history Dab = {xab(1), xab(2), …, xab(n)} • Let p be the number of cooperations by agent b toward a in the n previous encounters.

  8. A Computational Model (cont…) • Prior: a Beta distribution, p(θ) = Beta(c1, c2) • θ is the estimator for θab • c1 and c2: c1 = c2 = 1 by prior assumption (a uniform prior) • A simple estimator for θab: • Assume each encounter's cooperation probability is independent of the other encounters between a and b. • The likelihood of the n encounters: L(Dab | θ) = θ^p (1−θ)^(n−p) • Posterior estimate for θ: p(θ | Dab) = Beta(c1+p, c2+n−p)

  9. A Computational Model (cont…) • Trust towards b from a is the conditional expectation of θ given the history D: Tab = p(xab(n+1) = cooperate | Dab) = E[θ | Dab] = (c1 + p) / (c1 + c2 + n), which with c1 = c2 = 1 is (1 + p) / (n + 2).
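The closed-form mean above is easy to compute directly. A minimal Python sketch (our own illustration, not code from the presentation), assuming the uniform prior c1 = c2 = 1 from slide 8:

```python
# Minimal sketch of the trust estimate from slides 8-9: the posterior mean
# of a Beta(c1 + p, c2 + n - p) distribution, with the uniform prior
# c1 = c2 = 1 assumed on slide 8. Function and variable names are ours.
def trust(p: int, n: int, c1: float = 1.0, c2: float = 1.0) -> float:
    """Expected probability that b cooperates in encounter n+1,
    given p cooperations by b in n past encounters with a."""
    assert 0 <= p <= n, "cooperations cannot exceed encounters"
    return (c1 + p) / (c1 + c2 + n)

print(trust(8, 10))  # 0.75: 8 cooperations in 10 encounters
```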

  10. Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns (of the communication channel) • P2PRep • XRep • Conclusion

  11. Reputation-based trust management • Two examples: • Amazon.com • Visitors usually look for customer reviews before deciding to buy new books. • eBay • Participants in eBay's auctions can rate each other after each transaction. • Both examples use a completely centralized mechanism for storing and exploring reputation data.

  12. P2P Properties • No central coordination • No central database • No peer has a global view of the system • Global behavior emerges from local interactions • Peers are autonomous • Peers and connections are unreliable

  13. Design Considerations • The system should be self-policing • The shared ethics of the user population are defined and enforced by the peers themselves and not by some central authority • The system should maintain anonymity • A peer's reputation should be associated with an opaque identifier rather than with an externally associated identity • The system should not assign any profit to newcomers • The system should have minimal overhead in terms of computation, infrastructure, storage, and message complexity • The system should be robust to malicious collectives of peers who know one another and attempt to collectively subvert the system.

  14. DMRep [KZ2001] • An approach that addresses the problem of reputation-based trust management at both the data management and the semantic level • Behavioral data B: • Observations t(q,p) that a peer q ∈ P makes when it interacts with a peer p ∈ P • B(p) = { t(p,q) or t(q,p) | q ∈ P } ⊆ B • In a decentralized environment: • How to assess trust given B(p) and B • How to obtain such B(p) and B to construct trust.

  15. DMRep • In the decentralized environment, if a peer q has to determine the trustworthiness of a peer p: • It has no access to the global knowledge B and B(p) • Two ways to obtain data: • Directly, by interactions: Bq(p) = { t(q,p) | t(q,p) ∈ B } • Indirectly, through a limited number of referrals from witnesses r ∈ Wq ⊆ P: Wq(p) = { t(r,p) | r ∈ Wq, t(r,p) ∈ B }

  16. DMRep • Assumptions: • The probability of cheating within the society is comparatively low • It is therefore more difficult to hide malicious behavior. • Complaints c(p,q): • An agent p can, in the case of malicious behavior by q, file a complaint c(p,q)

  17. A simple situation • p and q interact and later on r wants to determine the trustworthiness of p and q. • Assume p is cheating, q is honest • After their interaction, • q will file a complaint about p • p will file a complaint about q in order to hide its misbehavior. • If p continues to cheat, r can conclude p is the cheater by observing the other complaints about p

  18. Reputation calculation • T(p) = |{c(p,q) | q ∈ P}| × |{c(q,p) | q ∈ P}| • A high value of T(p) indicates that p is not trustworthy • Problem • The reputation is determined from global knowledge of complaints, which is very difficult to obtain.
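A toy sketch of this measure over a hypothetical complaint log (the log format and function names are our assumptions):

```python
# Sketch of T(p) from slide 18, over a hypothetical complaint log of
# (filer, subject) pairs, where c(p, q) means "p complains about q".
def T(p: str, complaints: list[tuple[str, str]]) -> int:
    filed_by_p = sum(1 for filer, _ in complaints if filer == p)
    about_p = sum(1 for _, subject in complaints if subject == p)
    # A cheater both attracts complaints and files fake ones to hide,
    # so the product is high for untrustworthy peers.
    return filed_by_p * about_p
```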

  19. The storage structure • P-Grid • Insert(a, k, v), where a is an arbitrary agent in the network, k is the key value to be searched for, and v is the data value associated with the key • Query(a, k): v, where a is an arbitrary agent in the network; the query returns the data values v for the corresponding key k • Properties • There exists an efficient decentralized bootstrap algorithm which creates the access structure without central control • The search algorithm consists of randomly forwarding requests from one peer to another • All algorithms scale gracefully: time and space complexity are both O(log n)

  20. Decentralized Data Management • (Figure: a P-Grid over six peers. The key space is split into the prefixes 0 and 1 and further into 00, 01, 10, 11; each peer stores the complaints about and by the peers whose keys fall under its prefix, and keeps routing references such as 0:2, meaning "for keys starting with 0, ask peer 2". A query for key 100 is forwarded along these references: Query(5,100) goes from peer 5 to peer 6, then Query(6,100) to peer 4, where Query(4,100) finds the data.)

  21. DMRep • Access problem: • p still has to decide on r's trustworthiness • Even if r is honest, it may not be reliably reachable over the network. • (Figure: naive checking explodes into an exploration of the whole network: judging p requires consulting witnesses rq1 … rqn, whose reports in turn require checking their own witnesses rrq11 … rrqnn, and so on.)

  22. Local computation of Trust • Assume that peers are malicious only with a certain probability πi ≤ πmax < 1. • With r replicas, choose r so that on average πmax^r < ε, where ε is an acceptable fault tolerance. • Then, if we receive the same data about a specific peer from a sufficient number of replicas, we need no further checks. • This also limits the depth to which the trustworthiness of peers is explored, bounding the search space.
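A quick worked illustration of the replication bound, with πmax and ε chosen arbitrarily by us:

```python
import math

# How many replicas are needed so that the chance of all of them lying
# stays below the tolerance eps, i.e. the smallest r with pi_max**r < eps.
def replicas_needed(pi_max: float, eps: float) -> int:
    return math.ceil(math.log(eps) / math.log(pi_max))

print(replicas_needed(0.3, 0.001))  # 6, since 0.3**6 = 0.000729 < 0.001
```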

  23. Algorithm: Check Complaints • W = {(cri(q), cfi(q), si, fi) | i = 1, …, w} • w: number of witnesses found • cri(q): number of complaints q received • cfi(q): number of complaints q filed • fi: the frequency with which si is found (reflecting the non-uniformity of the P-Grid structure) • (Figure: peer p, deciding about q, contacts agents a1 … an and reaches witnesses s1 … sw.) • Normalization: • crinorm(q) = cri(q) · (1 − ((s − fi)/s)^s), i = 1, …, w • cfinorm(q) = cfi(q) · (1 − ((s − fi)/s)^s), i = 1, …, w

  24. Algorithm • Function to determine trustworthiness: decidep(crinorm(q), cfinorm(q)) = 1 if crinorm(q) · cfinorm(q) ≤ crpavg · cfpavg, else −1 • Exploring trust: • S = Σi=1…w decidep(crinorm(q), cfinorm(q)) • If S = 0, check the trustworthiness of the individual witnesses.
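A minimal sketch of the decision rule and the witness vote, assuming the normalized complaint counts and p's averages are already at hand; all names are ours:

```python
# Sketch of the DMRep decision rule from slide 24: compare q's (normalized)
# complaint product against the averages p has observed, then tally votes.
def decide(cr_norm: float, cf_norm: float, cr_avg: float, cf_avg: float) -> int:
    """+1 if q looks trustworthy relative to p's observed averages, else -1."""
    return 1 if cr_norm * cf_norm <= cr_avg * cf_avg else -1

def assess(witness_reports, cr_avg, cf_avg):
    """witness_reports: list of (cr_norm, cf_norm) pairs, one per witness."""
    s = sum(decide(cr, cf, cr_avg, cf_avg) for cr, cf in witness_reports)
    if s > 0:
        return "trustworthy"
    if s < 0:
        return "untrustworthy"
    return "undecided: check the trustworthiness of individual witnesses"
```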

  25. DMRep Discussion • Strengths • An approach that addresses the problem at both the data management and the semantic level • The method can be implemented in a fully decentralized peer-to-peer environment and scales well to large numbers of participants. • Limitations • Assumes an environment with low cheating rates. • Requires a specific data management structure (the P-Grid). • Not robust to malicious collectives of peers.

  26. Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion

  27. How does one peer evaluate others? • Directly (by its own experience) • sat(i, j): incremented by 1 each time i downloads an authentic file from j. • unsat(i, j): incremented by 1 each time i downloads an inauthentic file from j, or fails to download a file from j. • Local reputation value: sij = sat(i, j) − unsat(i, j). • Indirectly (by others' experience) • ask neighbors • ask friends (familiars) • ask authorities (peers that are more reputable) • ask witnesses

  28. Normalizing Local Reputation Values • Local reputation vector: cij = max(sij, 0) / Σj max(sij, 0) • Most entries are 0, since a peer interacts with only a few others.

  29. Aggregating Local Reputation Values • Peer i asks its friends about their opinions of peer k: tik = Σj cij cjk • Peer i asks its friends about their opinions of all peers: ti = CT ci, where C is the matrix [cij] • Peer i asks about other peers again, which amounts to asking its friends' friends: ti = (CT)2 ci

  30. Global Reputation Vector • Continuing in this manner, ti = (CT)n ci. • If n is large, ti converges to the same left principal eigenvector of C for every peer i (provided C is irreducible and aperiodic). • We call this eigenvector t the global reputation vector. • tj, an element of t, quantifies how much trust the system as a whole places in peer j. • Non-distributed algorithm: iterate t(k+1) = CT t(k) from some initial vector until convergence.
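A hedged sketch of the non-distributed computation, assuming the entire normalized matrix C has been gathered in one place; the initialization, tolerance, and iteration cap are our choices:

```python
import numpy as np

# Sketch of the non-distributed iteration t <- C^T t from slide 30.
def global_reputation(C: np.ndarray, tol: float = 1e-10,
                      max_iter: int = 1000) -> np.ndarray:
    n = C.shape[0]
    t = np.full(n, 1.0 / n)            # start from the uniform distribution
    for _ in range(max_iter):
        t_next = C.T @ t               # one aggregation round
        if np.abs(t_next - t).sum() < tol:   # L1 change below tolerance
            break
        t = t_next
    return t_next
```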

  31. Practical Issues • Pre-trusted peers: P is a set of peers known to be trustworthy; p is the pre-trusted vector over P, with pi = 1/|P| if i ∈ P and pi = 0 otherwise. • Assign some trust to the pre-trusted peers: start the iteration from t(0) = p. • For new peers, who do not know anybody else: set cij = pj. • Modified non-distributed algorithm: t(k+1) = (1 − a) CT t(k) + a p, for some constant 0 < a < 1.
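The same sketch, modified as above with a pre-trusted vector p; the blend weight a = 0.15 below is our arbitrary pick, not a value given in the slides:

```python
import numpy as np

# Sketch of the modified iteration from slide 31 with pre-trusted vector p.
def global_reputation_pretrust(C: np.ndarray, p: np.ndarray, a: float = 0.15,
                               tol: float = 1e-10,
                               max_iter: int = 1000) -> np.ndarray:
    t = p.copy()                        # t(0) = p, per the slide
    for _ in range(max_iter):
        t_next = (1 - a) * (C.T @ t) + a * p   # blend with pre-trust
        if np.abs(t_next - t).sum() < tol:
            break
        t = t_next
    return t_next
```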

  32. Distributed Algorithm • All peers in the network cooperate to compute and store the global trust vector. • Each peer stores and computes its own global trust value. • Minimize the computation, storage, and message overhead.

  33. Distributed Algorithm (cont…) • Ai: the set of peers which have downloaded files from peer i. • Bi: the set of peers from which peer i has downloaded files.

  34. Message Traffic • Mean number of acquaintances per peer: m. • Mean number of iterations: k. • Mean number of messages per peer: O(mk).

  35. Secure Algorithm • The trust value of one peer should be computed by more than one other peer, because • malicious peers may report false trust values of their own, and • malicious peers may compute false trust values for others. • Use multiple DHTs to assign mother peers. • The number of mother peers per peer is the same for all peers.
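A minimal sketch of how multiple hash functions could assign mother peers; the salted-SHA1 construction and the flat peer list stand in for a real DHT and are our assumptions:

```python
import hashlib

# Sketch of assigning t "mother" peers via several hash functions.
def mothers(peer_id: str, all_peers: list[str], t: int = 3) -> list[str]:
    chosen = []
    for k in range(t):                  # one DHT / hash function per k
        digest = hashlib.sha1(f"{k}:{peer_id}".encode()).hexdigest()
        # A real DHT maps the digest to the peer responsible for that point
        # in the id space; for the sketch we simply index a flat peer list.
        chosen.append(all_peers[int(digest, 16) % len(all_peers)])
    return chosen
```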

  36. Secure Algorithm (cont…) • (Figure: an example assignment in which a mother peer stores, for its daughter i, the sets Ai and Bi of i's download partners, illustrated with peers 0, 1, 2, 5, 9, 11, and 12.)

  37. Secure Algorithm (cont…) • (Figure: a peer queries the mother peer m of another peer to obtain that peer's trust value.)

  38. Secure Algorithm (cont…) • (Figure: the hash function H1 maps the peer identifiers 0, 1, 2, 5, 9, 11, 12, and i into the DHT's identifier space, determining which peer acts as each peer's mother.)

  39. Modified Secure Algorithm

  40. Message Traffic • Mean number of acquaintances per peer: m. • Mean number of iterations: k. • Number of mothers per peer: t. • Mean number of messages per peer: O(tmk).

  41. Using Global Reputation Values • Isolate malicious peers: download from reputable peers. • Incentivize peers to share files: reward reputation. • Allow newcomers to build trust: give them a 10% probability of being selected, and reward newcomers generously. • Balance the load: download probabilistically based on trust values, and cap reputation at a maximum (e.g., sij < MAXVALUE).
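A sketch of a selection rule combining these ideas, keeping the 10% newcomer probability from the slide; everything else (data shapes, names) is our assumption:

```python
import random

# Sketch of a download-source selection rule following slide 41: a 10%
# chance goes to newcomers, otherwise selection is proportional to trust.
def choose_source(responders: dict[str, float]) -> str:
    """responders maps peer id -> global trust value t_j (>= 0)."""
    newcomers = [p for p, t in responders.items() if t == 0.0]
    if newcomers and random.random() < 0.10:
        return random.choice(newcomers)   # let newcomers build trust
    peers = list(responders)
    weights = [responders[p] for p in peers]
    if sum(weights) == 0:                 # every responder is new
        return random.choice(peers)
    return random.choices(peers, weights=weights, k=1)[0]
```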

  42. Limitations of EigenRep • Cannot distinguish between newcomers and malicious peers. • Malicious peers can still cheat cooperatively. • A peer should not report on its predecessors by itself. • Flexibility • How should reputation values be calculated as peers join and leave, go online and offline? • When should global reputation values be updated? • According to the new local reputation vectors of all peers. • Anonymity? • A mother peer knows its daughters.

  43. Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion

  44. P2PRep & XRep • The focus is not on the computation of reputations • but on the security of the exchanged messages • Queries • Votes • How to prevent different security attacks

  45. P2PRep & XRep • Using Gnutella as the reference system • A fully decentralized P2P infrastructure • Peers have low accountability and trust • Security threats to Gnutella • Distribution of tampered information • Man-in-the-middle attacks

  46. Sketch of P2PRep • P selects a peer among those who respond to P's query • P polls its peers for opinions about the selected peer • Peers respond to the poll with votes • P uses the votes to make its decision

  47. Sketch of P2PRep (cont'd) • To ensure the authenticity of offerers and voters, and the confidentiality of votes: • Use public-key encryption to provide integrity and confidentiality of messages • Require each peer_id to be a digest of a public key for which the peer knows the corresponding private key

  48. P2PRep • Two approaches: • Basic polling • Voters do not provide peer_id in votes • Enhanced polling • Voters declare their peer_id in votes

  49. P2PRep – Basic Polling (a): message exchange between the initiator P and its peers
  1. P → all peers: Query(search_string)
  2. Si → P, for each responding offerer Si ∈ S: QueryHit(IP, port, speed, Result, peer_id)
  3. P selects a top list T of offerers and generates a key pair (PKpoll, SKpoll)
  4. P → all peers: Poll(T, PKpoll)
  5. Vi → P, for each voter Vi ∈ V: PollReply({(IP, port, Votes)}PKpoll)
  6. P removes suspicious votes and selects a random subset V′ of the voters
  7. P → Vj directly, for each Vj ∈ V′: TrueVote(Votesj)
  8. Vj → P directly: TrueVoteReply(response); if the response is negative, P discards Votesj
  9. P selects a peer s for downloading

  50. P2PRep – Basic Polling (b): challenge-response between the initiator P and the selected peer s
  1. P generates a random string r
  2. P → s: Challenge(r)
  3. s → P: Response([r]SKs, PKs)
  4. If h(PKs) = peer_ids and {[r]SKs}PKs = r, P downloads from s
  5. P updates its experience_repository
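A sketch of the identity check in step 4, with the signature verification left as an abstract callback since the slides do not fix a cryptographic library; the SHA-1 digest choice is our assumption:

```python
import hashlib

# Sketch of the check in step 4: the offerer's peer_id must be the digest
# of its public key, and it must have signed our challenge r correctly.
def peer_id_of(public_key: bytes) -> str:
    """peer_id is defined as a digest of the peer's public key."""
    return hashlib.sha1(public_key).hexdigest()

def accept_offerer(claimed_peer_id: str, public_key: bytes,
                   challenge: bytes, signed_challenge: bytes, verify) -> bool:
    """Accept s only if its key matches its id and it signed our challenge.

    verify(public_key, signature, message) -> bool is assumed to be supplied
    by whatever signature scheme the implementation actually uses.
    """
    return (peer_id_of(public_key) == claimed_peer_id
            and verify(public_key, signed_challenge, challenge))
```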
