Decentralized P2P Reputation Schemes: Do They Work? Ivan Osipkov
P2P Systems: Characteristics
• Topology
  • Structured: DHT
  • Unstructured: Gnutella
• Trust
  • Closed system: long-term, credentials
  • Open system: anyone can join/leave
• Dynamics
  • Long-term presence
  • Short-term presence
Reputation: Variations
• Reputation vs. Credit: we don't care about the distinction!
• Reputation
  • Example: Ivan has been providing good-quality files
  • Subjective opinion (others may not think so)
• Credit
  • Example: Ivan has downloaded 10 GB and uploaded 5 GB
  • Objective
• Global vs. Local reputation
  • Global: everyone has the same view
  • Local: "Yongdae liked the files Ivan provided."
  • Global Reputation = f(local reputations of all participants)
  • Everyone chips in: How? What is the meaning of the result?
  • Chains of trust: every peer extends his view of the system. Is trust transitive?
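To make the "global reputation as a function of local reputations" idea concrete, here is a minimal aggregation sketch. The weighting choice and the function name are illustrative assumptions, not part of any of the schemes discussed later.

```python
from collections import defaultdict

def aggregate_global_reputation(local_opinions, weights=None):
    """Combine per-peer local opinions into a single global score per target.

    local_opinions: dict mapping rater -> {target: score in [0, 1]}
    weights:        optional dict mapping rater -> weight (e.g. the rater's
                    own reputation); defaults to equal weight for everyone.
    """
    totals, norm = defaultdict(float), defaultdict(float)
    for rater, opinions in local_opinions.items():
        w = 1.0 if weights is None else weights.get(rater, 0.0)
        for target, score in opinions.items():
            totals[target] += w * score
            norm[target] += w
    return {t: totals[t] / norm[t] for t in totals if norm[t] > 0}

# Example: two raters disagree about "ivan"; the global view averages them.
print(aggregate_global_reputation({
    "yongdae": {"ivan": 0.9},
    "alice":   {"ivan": 0.4, "bob": 0.7},
}))
```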
Reputation of what?
• Subjective: S, Objective: O
• Peer actions
  • Is the peer behaving OK? (S)
  • QoS: Is he providing the requested service? (S/O)
• Services
  • Transferable services: files, information
  • Non-transferable: CPU, storage
• Combining reputation of services and peers
  • Easy when the service is non-transferable
  • Harder when it is transferable: how?
Reputation Approaches
• Polling
  • Ask others about a peer
• Witnesses
  • Ask (secure) witnesses of a peer
• Distributed calculations
  • Everyone chips in
  • The result is the global reputation of a peer
Attacks on P2P Systems
• Goal of the attacks
  • Reputation escalation of the attacker
  • Reputation de-escalation of the good guys
• Taxonomy based on collaboration
  • Uncoordinated attacks
  • Collusions
    • All attackers benefit: a group acts as a single peer in order to boost each other's reputation
    • Ex) Black hole: only one member benefits; a group acts as a single peer to boost the reputation of that member
  • Sybil attack
    • A user generates multiple identities and launches a collaboration attack.
    • Are 2 peers actually the same peer? Do different IP addresses mean different peers? Peers may change IP addresses.
    • Asking for reputation could be meaningless (most schemes are vulnerable).
    • Assuming a PKI does not mean the Sybil attack is impossible (it depends on how easily certificates can be generated).
Paper List
• Ernesto Damiani, De Capitani di Vimercati, Stefano Paraboschi, Pierangela Samarati and Fabio Violante, "A Reputation-Based Approach for Choosing Reliable Resources in Peer-to-Peer Networks", CCS 2002
• A. A. Selcuk, E. Uzun and M. R. Pariente, "A Reputation-Based Trust Management System for P2P Networks", NDSS 2004
• S. D. Kamvar, M. T. Schlosser and H. Garcia-Molina, "The EigenTrust Algorithm for Reputation Management in P2P Networks", WWW 2003
• T. Ngan, D. S. Wallach and P. Druschel, "Enforcing Fair Sharing of Peer-to-Peer Resources", IPTPS 2003
• L. P. Cox and B. D. Noble, "Samsara: Honor Among Thieves in Peer-to-Peer Storage", SOSP 2003
• V. Vishnumurthy, S. Chandrakumar and Emin Gun Sirer, "KARMA: A Secure Economic Framework for Peer-To-Peer Resource Sharing", citeseer.nj.nec.com/582637.html
• H. Zhang, D. Dutta, A. Goel and R. Govindan, "The Design of A Distributed Rating Scheme for Peer-to-peer Systems", Workshop on Economic Issues in Peer-to-Peer Systems, 2003
Polling for Reputation: XRep (CCS)
• Scheme
  • Assumes a PKI
  • Probabilistic polling: ask the community what they think about the peer and the file
  • Vote clustering based on IP address: to prevent Sybil and collaboration attacks
• Analysis
  • Secure but ineffective
  • How many nodes know about the peer/file?
  • The rich get richer
  • What if peers do not respond to polling? What is their incentive (especially for the free-loaders)?
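As a rough illustration of the vote-clustering idea, votes can be collapsed per network neighborhood so a cluster counts only once. The choice of a /24 prefix and the majority rule inside a cluster are assumptions for this sketch, not XRep's exact clustering rule.

```python
from collections import defaultdict

def cluster_votes_by_prefix(votes, prefix_len=3):
    """Collapse votes so each IP 'neighborhood' contributes a single vote.

    votes: list of (ip_address, vote) with vote in {+1, -1}.
    prefix_len: number of leading octets treated as one cluster (3 -> /24).
    """
    clusters = defaultdict(list)
    for ip, vote in votes:
        prefix = ".".join(ip.split(".")[:prefix_len])
        clusters[prefix].append(vote)
    # Each cluster is reduced to the sign of its internal majority.
    clustered = [1 if sum(v) >= 0 else -1 for v in clusters.values()]
    return sum(clustered)  # net vote after clustering

# Ten colluding voters behind one /24 count the same as one honest voter.
votes = [("10.0.0.%d" % i, +1) for i in range(10)] + [("192.168.1.5", -1)]
print(cluster_votes_by_prefix(votes))  # -> 0 instead of +9
```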
Polling and Trust Chains (NDSS)
• Trust calculations
  • Recent transactions matter most
  • Distrust takes priority and persists for a threshold number of transactions
  • A recommender is punished for a bad recommendation
  • Transactions are restricted to locally known peers as much as possible
• Properties
  • Polling for reputations
  • New peers have 0 credit
  • The focus is on the spread of malicious files
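A minimal sketch of the "recent transactions matter most, distrust persists" flavor of trust update. The geometric recency weights and the distrust window below are illustrative choices, not the exact formulas from the NDSS paper.

```python
def trust_score(history, decay=0.8, distrust_window=5):
    """history: transaction outcomes, oldest first, each +1 (good) or -1 (bad).

    Recent outcomes are weighted most heavily (geometric decay), and any bad
    outcome within the last `distrust_window` transactions forces distrust.
    """
    if any(outcome < 0 for outcome in history[-distrust_window:]):
        return -1.0  # distrust is a priority and persists for a while
    score, weight_sum = 0.0, 0.0
    for age, outcome in enumerate(reversed(history)):  # age 0 = most recent
        w = decay ** age
        score += w * outcome
        weight_sum += w
    return score / weight_sum if weight_sum else 0.0

print(trust_score([+1, +1, +1, -1]))   # recent bad outcome -> -1.0
print(trust_score([-1] + [+1] * 10))   # old bad outcome has faded
```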
NDSS: Problems
• Collaboration attacks: hard to detect
  • Collusions undetected
  • Black hole undetected
• Relies on establishing long-term trust among local neighbors
• Storage overhead
• Sybil attacks
• Easy to shed distrust: just change identity
• Dynamic joins/leaves not addressed
Yet Another Scheme (Goel et al.)
• Polling is avoided: every user (or his witnesses) keeps his own reputation
• Reputation ages
  • Based on the number of votes
  • Must participate continuously
• Vote collection:
  • A user submits a list of voters and some are asked for votes → easy to shed bad reputation
  • A user has (DHT-based) witnesses that keep his reputation ← main approach
Yet Another Scheme Cont'd
• Credit
  • One must work disproportionately more to increase credit
  • Old credit does not matter
  • No credit for newcomers
  • But newcomers easily establish reputation
• Witnesses must be trusted
  • Static witnesses
  • How to choose them in an unstructured network?
  • A witness can be corrupted (or bribed).
• Bad-mouthing and good-mouthing are not dealt with
• Not enough details about exchanging votes
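A hedged sketch of "reputation ages with votes": each new voting round decays the old score, so a peer that stops participating loses reputation. The decay constant and the per-round update rule are illustrative assumptions, not the rating scheme from the Zhang/Dutta/Goel/Govindan paper.

```python
class AgingReputation:
    """Reputation that decays unless refreshed by new votes (illustrative)."""

    def __init__(self, decay=0.9):
        self.decay = decay
        self.score = 0.0

    def new_round(self, votes):
        """Apply one voting round: old reputation fades, new votes top it up.

        votes: iterable of +1 / -1 votes received this round.
        """
        self.score = self.decay * self.score + sum(votes)
        return self.score

rep = AgingReputation()
for _ in range(5):
    rep.new_round([+1, +1])   # active peer keeps building reputation
print(round(rep.score, 2))
for _ in range(5):
    rep.new_round([])         # inactive peer: reputation decays away
print(round(rep.score, 2))
```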
Distributed Computing: Basic EigenTrust
• Google's PageRank algorithm applied to reputation:
  • (C^T)^n · p converges to the principal eigenvector, i.e. the reputation vector
  • c(i,j): trust of peer i for peer j
  • c_i = a·(c_{i,1}, c_{i,2}, …, c_{i,n}) + (1−a)·(p_1, p_2, …, p_n), where c_i = {c_{i,k}} and p is derived from publicly pre-trusted peers
  • 6-8 rounds: let t^(0) = p
  • Each peer i computes its updated global reputation using the trust others have in him: t_i^(k+1) = (1−a)·(c_{1,i}·t_1^(k) + … + c_{n,i}·t_n^(k)) + a·p_i
  • Each peer exchanges c_{i,j}·t_i^(k+1) with the peers with whom he has interacted before
• Problems:
  • Needs pre-trusted peers
  • Trust transitivity is assumed
  • Trust is normalized: 2 peers with 10% trust count the same as 2 peers with 100% trust
  • Efficiency:
    • Lock-step synchronization
    • Updates require re-calculation
    • Total message size in the system is 6n·m², where m is the average number of peers a peer has interacted with
• Attacks:
  • Credit escalation/de-escalation: a single peer can change his own reputation, and the reputation of any peer with whom he interacted, through incorrect calculation
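A centralized sketch of the EigenTrust iteration t^(k+1) = (1−a)·C^T·t^(k) + a·p. The real algorithm distributes this computation across the peers (or their managers); the matrix values here are illustrative only.

```python
import numpy as np

def eigentrust(C, p, a=0.15, rounds=8):
    """Iterate t <- (1 - a) * C^T t + a * p.

    C: n x n matrix of normalized local trust, C[i, j] = trust of peer i in j
       (each row sums to 1).  p: pre-trusted distribution (sums to 1).
    """
    t = p.copy()
    for _ in range(rounds):          # ~6-8 rounds suffice in practice
        t = (1 - a) * C.T @ t + a * p
    return t

# Illustrative 3-peer example: peers 0 and 1 trust each other, nobody trusts 2 much.
C = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.1],
              [0.5, 0.5, 0.0]])
p = np.array([0.5, 0.5, 0.0])        # peers 0 and 1 are pre-trusted
print(eigentrust(C, p).round(3))
```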
EigenTrust with Managers
• Each peer has a set of managers
  • Managers hold the trust vector of the peer
  • And they perform the above calculations on his behalf
• Problems:
  • Needs consensus among managers
  • How to choose managers in an unstructured network?
  • Additional overhead
• Attacks:
  • Uncoordinated: a peer incorrectly informs his managers of his trust in others
  • Collusion: a majority of managers collude, resulting in incorrect calculation of the peer's reputation and of the reputation of everyone from whom he downloaded
  • Sybil attack
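A toy illustration of why manager consensus matters: a reported trust vector is accepted only if a majority of the peer's managers report the same one. The quorum threshold and the exact-match comparison are assumptions made for this sketch.

```python
from collections import Counter

def managers_agree(reported_vectors, quorum=0.5):
    """Accept a value only if more than `quorum` of the managers report it.

    reported_vectors: list of tuples, one per manager, each being the trust
    vector that manager computed/stored for the peer.
    """
    value, count = Counter(reported_vectors).most_common(1)[0]
    return value if count > quorum * len(reported_vectors) else None

honest = (0.2, 0.5, 0.3)
print(managers_agree([honest, honest, honest, (0.9, 0.05, 0.05)]))  # accepted
print(managers_agree([honest, (0.9, 0.05, 0.05)]))                  # no majority -> None
```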
KARMA
• Every peer has a set of managers
  • The managers form a bank
  • All transactions go through banks: the banks of the interacting peers coordinate on the credit transfer
  • Consensus of managers is needed
• Credit-transfer framework
• DHT layer used
• Problems:
  • Storage grows in order to keep old history
  • Bank overhead
  • The motivation to become a bank is not clear
• Attacks:
  • Collusion:
    • The bank majority must be trusted
    • Banks are static, so bribery is profitable
    • Banks for phantom nodes: the bank claims that a peer with a lot of credit is offline
  • Sybil attack: each peer gets initial credit, so by creating new peers one can download for free
  • Uncoordinated: use the initial credit and run (trial-offer syndrome)
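A minimal sketch of bank-mediated credit transfer between two peers' banks: the consumer's bank debits and the provider's bank credits. The data structures and the two-step debit/credit are assumptions for illustration, not KARMA's actual protocol messages.

```python
class Bank:
    """Toy per-peer bank (set of managers) tracking one peer's karma balance."""

    def __init__(self, peer_id, balance):
        self.peer_id = peer_id
        self.balance = balance

def transfer(consumer_bank, provider_bank, price):
    """Move `price` karma from the consumer to the provider, banks coordinating.

    Returns True if the transfer went through, False if the consumer lacks funds.
    """
    if consumer_bank.balance < price:
        return False                   # not enough karma to pay for the resource
    consumer_bank.balance -= price     # consumer's bank debits...
    provider_bank.balance += price     # ...and the provider's bank credits
    return True

alice_bank, bob_bank = Bank("alice", 10), Bank("bob", 3)
print(transfer(alice_bank, bob_bank, 4), alice_bank.balance, bob_bank.balance)  # True 6 7
print(transfer(bob_bank, alice_bank, 20))                                       # False
```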
Random Auditing Scheme (Druschel)
• Every peer has a book:
  • Advertised storage capacity
  • Where his files are stored remotely
  • Whose files are stored locally
• Each peer can store remotely no more than its advertised quota
• Peer policing
  • Each peer A that stores a file F for peer B anonymously checks the book of peer B
  • If the book states that the file is not stored remotely, A drops file F
• Random auditing
  • Periodically, each peer A audits the book of a random peer B
  • For each file F stored at node C, A checks that C's book states the same
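A sketch of the cross-book consistency check performed during an audit: for every remote-storage claim in the auditee's book, the auditor checks whether the named holder's own book agrees. The book representation below is an assumption made for illustration.

```python
def audit(auditee_book, all_books):
    """Return the inconsistent remote-storage claims in the auditee's book.

    A book is a dict with:
      'stored_remotely': set of (file_id, holder_id) claims,
      'stored_locally':  set of (file_id, owner_id) entries.
    all_books maps peer_id -> book (stands in for asking each holder directly).
    """
    discrepancies = []
    for file_id, holder in auditee_book["stored_remotely"]:
        holder_book = all_books.get(holder, {"stored_locally": set()})
        if (file_id, auditee_book["peer_id"]) not in holder_book["stored_locally"]:
            discrepancies.append((file_id, holder))   # holder's book disagrees
    return discrepancies

books = {
    "B": {"peer_id": "B", "stored_remotely": {("f1", "C"), ("f2", "D")}, "stored_locally": set()},
    "C": {"peer_id": "C", "stored_remotely": set(), "stored_locally": {("f1", "B")}},
    "D": {"peer_id": "D", "stored_remotely": set(), "stored_locally": set()},
}
print(audit(books["B"], books))   # [('f2', 'D')] -- someone is lying about f2
```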
Random Auditing: Problems
• General problems:
  • If a peer goes offline, all his files are dropped
  • Needs a DHT for auditing
  • Peers are forced to store files if they have free space according to the books: they cannot refuse any storage request
  • Needs to assume that everyone's capacity is fixed
  • Peers have to constantly police the others that store files for them
  • Catching intentionally incorrect books is probabilistic
• Attacks:
  • Collusion:
    • The book of peer A states that it stores file F for peer B
    • The book of peer B does not state that it stores file F at A
    • Peer A is audited by C, who discovers the discrepancy
    • Who is lying? The only thing C can do is ask A to drop F and adjust its book
  • Uncoordinated:
    • Peer A states that it stores a file at (or for) peer B
    • Again, it is impossible to tell who is lying
    • If peer B is punished according to the protocol, A wins
Samsara: Bartering Approach
• The barter is a token file of the same size as the stored data
• Tokens can be transferred to others, resulting in chains and/or cycles
  • Chains are frequent
  • Cycles are rare
• Probabilistic dropping
• Challenge-response on both files and tokens
• Problems:
  • Chains reduce the storage overhead, but are dangerous when a failure occurs in the chain: the files down the chain are dropped!
  • Cycles avoid this problem but are rare
  • 100% bandwidth overhead
  • Computational overhead
  • Storage overhead is n·F/c, where F is the average file size, c is the average chain length (≈4) and n is the total number of peers
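A minimal sketch of the challenge-response idea used for both files and tokens: the challenger sends a fresh nonce and expects a hash of the stored object concatenated with it, so the holder must actually possess the bytes. The exact challenge format is an assumption, not Samsara's wire protocol.

```python
import hashlib
import os

def make_challenge():
    """Fresh random nonce so old responses cannot be replayed."""
    return os.urandom(16)

def respond(stored_bytes, nonce):
    """Prove possession: hash of the stored object concatenated with the nonce."""
    return hashlib.sha256(stored_bytes + nonce).hexdigest()

def verify(original_bytes, nonce, response):
    return respond(original_bytes, nonce) == response

data = b"contents of the stored file or storage claim (token)"
nonce = make_challenge()
print(verify(data, nonce, respond(data, nonce)))               # honest holder -> True
print(verify(data, nonce, respond(b"something else", nonce)))  # cheater -> False
```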
Samsara: Attacks
• Uncoordinated attacks
  • A peer can maintain files without contributing
  • Going offline in the chain leads to loss of the files down the chain
• Files themselves can be used as tokens
  • Peer A sends F to peer B
  • Peer B sends E(F) as the token to peer A
  • When challenged for F, peer B first challenges A for E(F)
  • In effect, peer B does not store F
Generic Approaches to Reputation
• Broadcast approach: each peer communicates a list of reputations to everyone else
  • Overhead
  • Everyone must be online
  • Needs a reliable broadcast protocol
• Polling:
  • Does not scale
  • If negative votes are a minority, the peer can still participate
• Static witness approach:
  • Needs a structured topology
  • Witnesses need to be secure
  • Witnesses should be able to verify transactions
  • The dynamicity of the group needs to be dealt with
  • Peer leaves/joins are of concern
• Dynamic witness approach:
  • Witnesses change with time
  • Witness collusions become harder
  • How do we choose witnesses?
  • Witness transfer from a colluding group of witnesses to an honest one should not lead to propagation of incorrect information
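One common way to realize dynamically changing witnesses, sketched below as an assumption rather than a construction proposed in these slides, is to derive the witness set from a hash of the peer's identifier and the current epoch, so the set rotates over time and cannot be chosen by the peer itself.

```python
import hashlib

def witness_set(peer_id, epoch, all_peers, k=3):
    """Pick k witnesses for `peer_id` in a given epoch by hashing into the peer list.

    Because the epoch is part of the hash, the witness set rotates over time,
    which makes long-lived witness collusion harder to arrange.
    """
    witnesses, i = [], 0
    candidates = [p for p in sorted(all_peers) if p != peer_id]
    while len(witnesses) < min(k, len(candidates)):
        digest = hashlib.sha256(f"{peer_id}|{epoch}|{i}".encode()).hexdigest()
        choice = candidates[int(digest, 16) % len(candidates)]
        if choice not in witnesses:
            witnesses.append(choice)
        i += 1
    return witnesses

peers = [f"peer{j}" for j in range(20)]
print(witness_set("peer7", epoch=1, all_peers=peers))
print(witness_set("peer7", epoch=2, all_peers=peers))  # different epoch, different witnesses
```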
Long-Term vs Short-Term Reputation
• Long-term
  • Rich peers dominate
  • They can afford to be negative
  • Long-term state on all leaving peers is required
  • Witness bribery pays off even when witnesses change dynamically
  • Collusions and Sybil attacks are profitable
• Short-term
  • Credit is lost over time
  • Peers need to participate continuously
  • Peer credit is lost when they leave: no long-term state is required in the system
  • New peers should acquire working credit quickly
  • Easy to shed bad reputation
  • A change of a colluding witness set to an honest set does not propagate the error
General Directions
• Short-term vs long-term: which one?
• All solutions, when applied to long-term reputation,
  • are unscalable and/or inefficient in a dynamic decentralized P2P system
  • need to drop old reputation at some point
• Short-term reputation
  • Problems:
    • Aging reputation: hard to accept in the scientific community
    • Shedding of bad reputation: the bad guys return
  • Still, the main attacks can be greatly alleviated
  • We need to understand how aging should be done correctly
  • A number of papers use it
• Obtaining reputation information
  • Polling is ineffective in large systems
  • Distributed calculation is too expensive
  • Dynamically changing witnesses appear to be the best choice