Trust Management Chen Ding, Chen Yueguo, Cheng Weiwei
Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion
Trust Management • “a unified approach to specifying and interpreting security policies, credentials, relationships [which] allows direct authorization of security-critical actions” – Blaze, Feigenbaum & Lacy • Trust Management is the capture, evaluation and enforcement of trusting intentions.
Reputation, Trust and Reciprocity • Reputation: the perception that an agent creates through past actions about its intentions and norms. • Trust: a subjective expectation an agent has about another's future behavior, based on the history of their encounters. • Reciprocity: mutual exchange of deeds. [Diagram: within a given social network, an increase in ai's reciprocating actions increases ai's reputation, which increases aj's trust of ai, which in turn increases ai's reciprocating actions.]
A Computational Model • Defines trust as a dyadic quantity between the trustor and trustee, which can be inferred from reputation data about the trustee • Two simplifications • The embedded social networks are taken to be static • The action space is restricted to: α ∈ {cooperate, defect}
Notations for the Model • Reputation: θji(c) ∈ [0,1] • Let C be the set of all contexts of interest. • θji(c) represents ai's reputation in an embedded social network of concern to aj, for the context c ∈ C • History: Dji(c) = {E*} • Dji(c) represents a history of encounters that aj has had with ai within the context c. • Trust: Tji(c) = E[θji(c) | Dji(c)] • The higher the trust level for agent ai, the higher the expectation that ai will reciprocate agent aj's actions.
A Computational Model (cont…) • θab: b's reputation in the eyes of a (within a context c). • Xab(i): the i-th transaction between a and b. • After n transactions we obtain the history data • History: Dab = {Xab(1), Xab(2), …, Xab(n)} • Let p be the number of cooperations by agent b toward a in the n previous encounters.
A Computational Model (cont…) • Beta distribution prior: p(θ̂) = Beta(c1, c2) • θ̂: estimator for θab • c1 and c2: c1 = c2 = 1 (by prior assumptions) • A simple estimator for θab • Assume each encounter's cooperation probability is independent of the other encounters between a and b. • The likelihood of the n encounters: L(Dab | θ̂) = θ̂^p (1 − θ̂)^(n−p) • Posterior for θ̂: p(θ̂ | Dab) = Beta(c1 + p, c2 + n − p)
A Computational Model (cont…) • Trust towards b from a is the conditional expectation of θ̂ given the history Dab: Tab = p(xab(n+1) = cooperate | Dab) = E[θ̂ | Dab] = (c1 + p) / (c1 + c2 + n) • With c1 = c2 = 1 this reduces to Tab = (p + 1) / (n + 2)
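As a concrete illustration of the model above, a minimal Python sketch (function name is ours, not from the paper) that computes Tab from an encounter history under the uniform prior c1 = c2 = 1:

```python
# Minimal sketch of the Beta-posterior trust estimate (uniform prior c1 = c2 = 1).
# "history" is a list of encounter outcomes: True = cooperate, False = defect.

def trust(history, c1=1, c2=1):
    n = len(history)      # total encounters between a and b
    p = sum(history)      # cooperations by b toward a
    # Posterior is Beta(c1 + p, c2 + n - p); trust is its mean.
    return (c1 + p) / (c1 + c2 + n)

print(trust([True, True, False, True]))  # 4/6 ≈ 0.667
```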
Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns of the communication channel • P2PRep • XRep • Conclusion
Reputation-based trust management • 2 examples • Amazon.com • Visitors usually look for customer reviews before deciding to buy new books. • eBay • Participants in eBay's auctions can rate each other after each transaction. • Both examples use a completely centralized mechanism for storing and exploring reputation data.
P2P Properties • No central coordination • No central database • No peer has a global view of the system • Global behavior emerges from local interactions • Peers are autonomous • Peers and connections are unreliable
Design Considerations • The system should be self-policing • The shared ethics of the user population are defined and enforced by the peers themselves and not by some central authority • The system should maintain anonymity • A peer's reputation should be associated with an opaque identifier rather than with an externally associated identity • The system should not assign any profit to newcomers • The system should have minimal overhead in terms of computation, infrastructure, storage, and message complexity • The system should be robust to malicious collectives of peers who know one another and attempt to collectively subvert the system.
DMRep [KZ2001] • An approach that addresses the problem of reputation-based trust management at both the data management and the semantic level • Behavioral data B: • Observations t(q, p) that a peer q ∈ P makes when it interacts with a peer p ∈ P. • B(p) = { t(p, q) or t(q, p) | q ∈ P } ⊆ B • In a decentralized environment: • How to assess trust given B(p) and B • How to obtain such B(p) and B from which to construct trust.
DMRep • In the decentralized environment, if a peer q has to determine the trustworthiness of a peer p • It has no access to the global knowledge B and B(p) • 2 ways to obtain data: • Directly, by its own interactions: Bq(p) = { t(q, p) | t(q, p) ∈ B } • Indirectly, through a limited number of referrals from witnesses r ∈ Wq ⊆ P: Wq(p) = { t(r, p) | r ∈ Wq, t(r, p) ∈ B }
DMRep • Assumptions: • The probability of cheating within a society is comparatively low • This makes it more difficult to hide malicious behavior. • Complaint c(p, q) • An agent p can, in case of malicious behavior by q, file a complaint c(p, q)
A simple situation • p and q interact, and later on r wants to determine the trustworthiness of p and q. • Assume p is cheating and q is honest • After their interaction, • q will file a complaint about p • p will file a complaint about q in order to hide its own misbehavior. • If p continues to cheat, r can conclude that p is the cheater by observing the other complaints about p
Reputation calculation • T(p) = |{ c(p, q) | q ∈ P }| × |{ c(q, p) | q ∈ P }| • A high value of T(p) indicates that p is not trustworthy • Problem • The reputation is determined from global knowledge of complaints, which is very difficult to obtain.
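A direct transcription of this measure (a sketch; the function name is ours):

```python
def reputation_T(complaints_about_p, complaints_by_p):
    # T(p): product of the number of complaints p received and the
    # number of complaints p filed. A high T(p) marks p as untrustworthy.
    return len(complaints_about_p) * len(complaints_by_p)
```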
The storage structure • P-Grid • Insert(a, k, v), where a is an arbitrary agent in the network, k is the key value to be searched for, and v is the data value associated with the key • Query(a, k): v, where a is an arbitrary agent in the network; the query returns the data values v for the corresponding key k • Properties • There exists an efficient decentralized bootstrap algorithm which creates the access structure without central control • The search algorithm consists of randomly forwarding requests from one peer to another. • All algorithms scale gracefully; time and space complexity are both O(log n)
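To make the routing idea concrete, here is a minimal, self-contained sketch (our own simplified layout, not the paper's algorithm: each peer keeps a single reference per prefix level, whereas real P-Grid forwards randomly among several):

```python
# Each peer is responsible for a binary key prefix; a query that mismatches
# the prefix at some level is forwarded to a peer on the other side.

class Peer:
    def __init__(self, path, store):
        self.path = path      # binary prefix this peer is responsible for
        self.store = store    # key -> value, for keys under this prefix
        self.refs = {}        # level -> peer covering the complementary bit

def query(peer, key):
    """Route a query toward the peer whose prefix matches the key."""
    for level, bit in enumerate(peer.path):
        if key[level] != bit:
            return query(peer.refs[level], key)  # forward to the other side
    return peer.store.get(key)

p0 = Peer("0", {"010": "complaints about peer 2"})
p1 = Peer("1", {"101": "complaints about peer 5"})
p0.refs[0] = p1
p1.refs[0] = p0
print(query(p0, "101"))  # forwarded to p1 -> "complaints about peer 5"
```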
Decentralized Data Management [Figure: example P-Grid over peers 1–6; the binary key space (prefixes 0, 1, 00, 01, 10, 11) is partitioned among the peers, each peer keeps routing references of the form prefix:peer (e.g., 0:2, 01:2, 10:4) and stores the complaints about and by the peers assigned to its prefix; a query such as Query(5, 100) is forwarded, e.g. as Query(4, 100) and then Query(6, 100), until the responsible peer is found.]
DMRep • Access problem: • the inquiring peer still has to decide on r's trustworthiness • Even if r is honest, it may not be reliably reachable over the network. [Figure: witness referrals fan out from q's witnesses rq1 … rqn to their own witnesses, and so on, which amounts to an exploration of the whole network.]
Local computation of Trust • Assume that peers are malicious only with a certain probability p_i ≤ p_max < 1. • Use r replicas such that on average (p_max)^r < ε, where ε is an acceptable fault tolerance; the probability that all r replicas report the same wrong data is then negligible. • If we receive the same data about a specific peer from a sufficient number of replicas, we need no further checks. • This also limits the depth to which the trustworthiness of peers is explored, bounding the search space.
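The required number of replicas follows directly from (p_max)^r < ε; a small sketch under the slide's assumptions:

```python
import math

def replicas_needed(p_max, eps):
    # Smallest r with p_max**r < eps, i.e. r > log(eps) / log(p_max).
    return math.ceil(math.log(eps) / math.log(p_max))

print(replicas_needed(0.3, 0.001))  # 6, since 0.3**6 ≈ 7.3e-4 < 1e-3
```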
Algorithm: Check Complaints • Witness data: W = { (cr_i(q), cf_i(q), s_i, f_i) | i = 1, …, w } • w: number of witnesses found • cr_i(q): number of complaints q received, as reported by witness s_i • cf_i(q): number of complaints q filed, as reported by witness s_i • f_i: the frequency with which s_i is found (reflecting the non-uniformity of the P-Grid structure) [Figure: peer p forwards its query about q through agents a1 … an and collects answers from witnesses s1 … sw.] • Normalized complaint counts: • cr_i^norm(q) = cr_i(q) · (1 − ((s − f_i)/s)^s), i = 1, …, w • cf_i^norm(q) = cf_i(q) · (1 − ((s − f_i)/s)^s), i = 1, …, w
Algorithm • Function to decide trustworthiness: decide_p(cr_i^norm(q), cf_i^norm(q)) = 1 if cr_i^norm(q) · cf_i^norm(q) ≤ cr_p^avg · cf_p^avg, else −1 • Exploring trust: S = Σ_{i=1…w} decide_p(cr_i^norm(q), cf_i^norm(q)) • If S = 0 (undecided), check the trustworthiness of the single witnesses.
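Putting the decision rule together, a minimal Python sketch (variable names are ours; cr_avg and cf_avg stand for the averages p has observed in its own interactions, and s is left as an explicit parameter since the slide does not define it):

```python
def normalize(count, f, s):
    # Down-weight entries found with low frequency f out of s answers
    # (cf. the normalization formula on the previous slide).
    return count * (1 - ((s - f) / s) ** s)

def decide(cr_norm, cf_norm, cr_avg, cf_avg):
    # Vote +1 (trustworthy) if q's complaint product is at most the
    # average product p has observed, else -1.
    return 1 if cr_norm * cf_norm <= cr_avg * cf_avg else -1

def explore_trust(witnesses, s, cr_avg, cf_avg):
    # witnesses: list of (cr_i(q), cf_i(q), f_i) tuples.
    votes = sum(decide(normalize(cr, f, s), normalize(cf, f, s),
                       cr_avg, cf_avg)
                for cr, cf, f in witnesses)
    if votes == 0:
        return None    # undecided: check the single witnesses instead
    return votes > 0   # True = q judged trustworthy
```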
DMRep Discussion • Strengths • Addresses the problem at both the data management and the semantic level • Can be implemented in a fully decentralized peer-to-peer environment and scales well to large numbers of participants • Limitations • Assumes an environment with low cheating rates • Tied to a specific data management structure (P-Grid) • Not robust to malicious collectives of peers
Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion
How does one peer evaluate others? • Directly (by its own experience) • sat(i, j): incremented by 1 whenever i downloads an authentic file from j. • unsat(i, j): incremented by 1 whenever i downloads an inauthentic file from j, or fails to download a file from j. • Local reputation value: s_ij = sat(i, j) − unsat(i, j). • Indirectly (by others' experience) • ask neighbors • ask friends (familiars) • ask authorities (who are more reputable) • ask witnesses
Normalizing Local Reputation Values • Normalized local reputation: c_ij = max(s_ij, 0) / Σ_j max(s_ij, 0) • Local reputation vector of peer i: c_i = (c_i1, …, c_in) • Most entries are 0 (a peer interacts with only a few others)
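In code, the max-based normalization above looks like this sketch (one peer's row):

```python
def normalize_local(s_row):
    # s_row: local reputation values s_ij of peer i toward each peer j.
    positives = [max(s, 0) for s in s_row]
    total = sum(positives)
    if total == 0:
        return [0.0] * len(s_row)   # i has no positive experience yet
    return [p / total for p in positives]

print(normalize_local([3, -2, 0, 1]))  # [0.75, 0.0, 0.0, 0.25]
```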
Aggregating Local Reputation Values • Peer i asks its friends about their opinions on peer k: t_ik = Σ_j c_ij c_jk • Peer i asks its friends about their opinions on all peers: in vector form, t_i = C^T c_i, where C = (c_ij) • Peer i asks about other peers again: t_i = (C^T)^2 c_i (it seems like asking its friends' friends)
Global Reputation Vector • Continuing in this manner, t_i = (C^T)^n c_i • If n is large, t_i converges to the left principal eigenvector of C for every peer i (provided C is irreducible and aperiodic). • We call this eigenvector t the global reputation vector. • t_j, an element of t, quantifies how much trust the system as a whole places in peer j. • Non-distributed algorithm: repeatedly apply t^(k+1) = C^T t^(k) until convergence.
Practical Issues • Pre-trusted peers: P is a set of peers known to be trustworthy; the pre-trust vector p of P has p_i = 1/|P| if i ∈ P, and p_i = 0 otherwise. • Assign some trust to the pre-trusted peers in every iteration. • For new peers, who do not know anybody else: c_ij = p_j. • Modified non-distributed algorithm: t^(k+1) = (1 − a) C^T t^(k) + a p, for some constant a.
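A compact sketch of the modified non-distributed algorithm (NumPy; the damping constant a = 0.15 and the tolerance are our own choices):

```python
import numpy as np

def global_reputation(C, pretrusted, a=0.15, tol=1e-10):
    # C: n x n matrix of normalized local reputation values c_ij.
    # pretrusted: indices of the pre-trusted peers.
    n = C.shape[0]
    p = np.zeros(n)
    p[pretrusted] = 1.0 / len(pretrusted)
    t = p.copy()                          # start from the pre-trust vector
    while True:
        t_next = (1 - a) * C.T @ t + a * p
        if np.linalg.norm(t_next - t, 1) < tol:
            return t_next
        t = t_next

C = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
print(global_reputation(C, pretrusted=[0]))
```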
Distributed Algorithm • All peers in the network cooperate to compute and store the global trust vector. • Each peer stores and computes its own global trust value. • Minimize the computation, storage, and message overhead.
Distributed Algorithm (cont…) • A_i: set of peers which have downloaded files from peer i (and thus hold opinions c_ji about i). • B_i: set of peers from which peer i has downloaded files.
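In the distributed version, each peer i updates its own trust value from the opinions c_ji reported by the peers in A_i. A minimal single-process simulation of one round of this exchange (message passing elided; the update rule mirrors the modified algorithm above):

```python
def distributed_step(t, C, A, p, a=0.15):
    # t: current trust values; A[i]: peers that downloaded from i,
    # i.e. exactly the peers j with an opinion C[j][i] about i.
    return [(1 - a) * sum(C[j][i] * t[j] for j in A[i]) + a * p[i]
            for i in range(len(t))]

C = [[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
A = [[1, 2], [0, 2], [1]]   # j in A[i]  <=>  C[j][i] > 0
p = [1.0, 0.0, 0.0]         # peer 0 is pre-trusted
t = p[:]
for _ in range(50):         # iterate until the values settle
    t = distributed_step(t, C, A, p)
print(t)
```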
Message Traffic • Mean number of acquaintances per peer: m. • Mean number of iterations: k. • Mean number of messages per peer: O(mk).
Secure Algorithm • The trust value of one peer should be computed by more than one other peer, since • malicious peers may report false trust values of their own • malicious peers may compute false trust values for others • Use multiple DHTs to assign mother peers. • The number of mother peers per peer is the same for all peers.
Secure Algorithm (cont…) [Figure: a hash function H1 maps each peer i to the DHT position H1(i) of a mother peer, which stores i's sets A_i and B_i and computes i's trust value.]
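A minimal sketch of assigning mother peers via multiple hash functions (we simulate the DHTs with salted SHA-1 digests; the details are our own, not the paper's):

```python
import hashlib

def mother_peers(peer_id, num_peers, t=3):
    # One mother per DHT: hash (dht_index, peer_id) into the id space,
    # so every peer gets exactly t mothers, deterministically.
    mothers = []
    for d in range(t):
        digest = hashlib.sha1(f"{d}:{peer_id}".encode()).hexdigest()
        mothers.append(int(digest, 16) % num_peers)
    return mothers

print(mother_peers(peer_id=11, num_peers=64))  # three mother positions
```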
Message Traffic • Mean number of acquaintances per peer: m. • Mean number of iterations: k. • Number of mother peers per peer: t. • Mean number of messages per peer: O(tmk).
Using Global Reputation Values • Isolate malicious peers • download from reputable peers • Incentivize peers to share files • reward reputation • Allow newcomers to build trust • select a newcomer as the source with a probability of 10% • reward newcomers generously • Balance the load • download probabilistically based on trust values • cap the reputation (e.g., s_ij < MAXVALUE)
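A sketch of trust-weighted source selection with a fixed newcomer probability (the 10% figure comes from the slide; everything else is our own):

```python
import random

def choose_source(peers, trust, newcomer_prob=0.10):
    # peers: candidate ids; trust: global trust per peer (0 for newcomers).
    newcomers = [p for p in peers if trust.get(p, 0) == 0]
    if newcomers and random.random() < newcomer_prob:
        return random.choice(newcomers)       # give newcomers a chance
    weights = [trust.get(p, 0) for p in peers]
    if sum(weights) == 0:
        return random.choice(peers)           # nobody has a reputation yet
    return random.choices(peers, weights=weights, k=1)[0]

print(choose_source([1, 2, 3], {1: 0.6, 2: 0.4}))  # peer 3 is a newcomer
```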
Limitations of EigenRep • Cannot distinguish between newcomers and malicious peers. • Malicious peers can still cheat cooperatively • A peer should not report its predecessors by itself. • Flexibility • How to calculate reputation values as peers join and leave, go online and offline? • When to update global reputation values? • According to the new local reputation vectors of all peers. • Anonymity? • A mother peer knows its daughters.
Outline • Introduction • A Computational Model • Managing Trust in a Peer-2-Peer System • DMRep • EigenRep • Security Concerns • P2PRep • XRep • Conclusion
P2PRep & XRep • The focus is not on the computation of reputations • but on the security of the exchanged messages • Queries • Votes • How to prevent various security attacks
P2PRep & XRep • Using Gnutella as the reference system • A fully decentralized P2P infrastructure • Peers have low accountability and trust • Security threats to Gnutella • Distribution of tampered information • Man-in-the-middle attacks
Sketch of P2PRep • P selects a peer among those who respond to P's query • P polls its peers for opinions about the selected peer • Peers respond to the poll with votes • P uses the votes to make its decision
Sketch of P2PRep Cont’d • To ensure the authenticity of offerers & voters, and the confidentiality of votes: • Use public-key encryption to provide integrity and confidentiality of messages • Require each peer_id to be a digest of a public key for which the peer knows the corresponding private key
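For illustration, generating a peer_id as the digest of a freshly generated public key (a sketch using the Python cryptography package; the RSA and SHA-1 choices are our assumptions):

```python
import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_bytes = private_key.public_key().public_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)
peer_id = hashlib.sha1(public_bytes).hexdigest()
print(peer_id)  # self-certifying id: anyone can check h(PK) = peer_id
```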
P2PRep • Two approaches: • Basic polling • Voters do not provide peer_id in votes • Enhanced polling • Voters declare their peer_id in votes
P2PRep – Basic Polling (a) • Message exchange between the initiator P, the offerers S, and the voters V: 1. P broadcasts Query(search_string). 2. Each offerer Si ∈ S replies with QueryHit(IP, port, speed, Result, peer_id). 3. P selects a top list T of offerers, generates a key pair (PKpoll, SKpoll), and broadcasts Poll(T, PKpoll). 4. Each voter Vi ∈ V replies with PollReply({(IP, port, Votes)}PKpoll). 5. P removes suspicious votes and selects a random subset V′ of voters; it sends TrueVote(Votesj) to each Vj ∈ V′, which answers TrueVoteReply(response); if the response is negative, P discards Votesj. 6. P selects a peer s for downloading.
P2PRep – Basic Polling (b) • Message exchange between the initiator P and the selected peer s: 1. P generates a random string r and sends Challenge(r) to s. 2. s replies with Response([r]SKs, PKs). 3. If h(PKs) = peer_ids and {[r]SKs}PKs = r, P downloads from s and updates its experience_repository.
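A sketch of this final identity check (Python cryptography package; the RSA key type, PKCS#1 v1.5 padding, and hash choices are our assumptions, not P2PRep's specification):

```python
import hashlib, os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# s's long-term key pair; its peer_id is the digest of the public key.
sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk_bytes = sk.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
peer_id = hashlib.sha1(pk_bytes).hexdigest()

r = os.urandom(16)                                          # P's challenge
signature = sk.sign(r, padding.PKCS1v15(), hashes.SHA256())  # s's response

# P verifies: the public key matches the claimed peer_id, and the
# signature on r checks out, so s really controls that identity.
assert hashlib.sha1(pk_bytes).hexdigest() == peer_id
sk.public_key().verify(signature, r, padding.PKCS1v15(), hashes.SHA256())
print("identity verified; safe to download")
```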