
Reputations Based On Transitive Trust


Presentation Transcript


  1. Reputations Based On Transitive Trust Slides by Josh Albrecht

  2. Overview • Transitive Trust Examples • Problem Background and Definition • Example Algorithms • Sybil Attacks • More Definitions • Two Theorems on Impossibility of Defense Against Sybil Attacks [Friedman et al, 2007] • Solution—Two More Theorems • Practical Implications • Related Theorems [Altman & Tennenholtz, 2007]

  3. Transitive Trust-Based Reputations • Problem: Want to decide how much to trust some entity in the presence of subjective feedback • Solution: Use transitive trust: an entity’s reputation determines how much we trust a piece of feedback from that entity. • i.e., if A trusts B, and B trusts C, then A trusts C more than an unknown node D • Basically, we start with a set of trusted nodes, and expand the notion of trust recursively from there
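The recursive expansion described above can be sketched in a few lines of Python. The discounting rule here (multiply trust values along a path, keep the best path found) is our own illustrative choice, not an algorithm from the slides; the node names and weights are made up.

```python
def propagate_trust(edges, seeds, rounds=3):
    """Expand trust outward from a seed set of trusted nodes.

    edges: dict mapping node -> {neighbor: trust value in [0, 1]}
    seeds: dict mapping seed node -> initial trust
    A node's trust is the best discounted trust over any path found
    so far: trust(B) >= trust(A) * t(A, B).
    """
    trust = dict(seeds)
    for _ in range(rounds):
        for a, nbrs in edges.items():
            if a not in trust:
                continue
            for b, t in nbrs.items():
                candidate = trust[a] * t
                if candidate > trust.get(b, 0.0):
                    trust[b] = candidate
    return trust

# A trusts B, B trusts C; an unreachable node D stays unknown.
edges = {"A": {"B": 0.9}, "B": {"C": 0.8}}
print(propagate_trust(edges, {"A": 1.0}))
```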

  4. Real Life Examples

  5. Transitive Trust-Based Reputations [Diagram: example trust graph with edge trust values 0.1, 0.35, 0.4, 0.4, 0.9, 0.5, 0.02, 0.05, 0.45, 0.25, 0.02]

  6. Example Trust Mechanisms • Pathrank • Max Flow • PageRank

  7. Definitions • Trust Graph: G = (V, E) • Set of players (vertices): V • Set of edges: E ⊆ V × V • Trust values: t_uv ∈ [0, 1] for each edge (u, v) ∈ E • Reputation function: F • Reputation of v: F_v(G) • F is symmetric iff F commutes with permutation of the node names
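One possible encoding of these definitions in code (the class and field names below are our own, not notation from the slides):

```python
from dataclasses import dataclass, field

@dataclass
class TrustGraph:
    players: set                                # V: set of players (vertices)
    trust: dict = field(default_factory=dict)   # (u, v) -> t_uv in [0, 1]

    def add_trust(self, u, v, t):
        # Edges run between known players and carry a trust value in [0, 1].
        assert u in self.players and v in self.players and 0.0 <= t <= 1.0
        self.trust[(u, v)] = t

    def edges(self):
        return set(self.trust)                  # E: the set of directed edges

g = TrustGraph({"A", "B", "C"})
g.add_trust("A", "B", 0.9)
g.add_trust("B", "C", 0.5)
print(sorted(g.edges()))
```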

  8. Example Trust Mechanisms • Pathrank • Max Flow • PageRank

  9. PathRank Example [Diagram: PathRank computed on the example trust graph from slide 5, edge trust values 0.1, 0.35, 0.4, 0.4, 0.9, 0.5, 0.02, 0.05, 0.45, 0.25, 0.02]

  10. Max Flow Example [Diagram: max flow computed on the example trust graph from slide 5, edge trust values 0.1, 0.35, 0.4, 0.4, 0.9, 0.5, 0.02, 0.05, 0.45, 0.25, 0.02]

  11. PageRank • Initial algorithm behind Google’s ranking of webpages • Each page has a PageRank score • Each page divides its PageRank score evenly among its outgoing links • Simplified Algorithm: [Wikipedia, 2008] • Simulate a surfer that starts at a random page and randomly clicks links, with a 15% chance of jumping to a completely random page. • Resulting rankings are approximately equal to the chance that such a surfer will be on that page at any given time
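The simplified random-surfer description above can be computed deterministically with power iteration, which converges to the same stationary distribution. This is a generic sketch, not Google's implementation; the damping factor 0.85 corresponds to the 15% teleport chance.

```python
def pagerank(links, damping=0.85, iterations=100):
    """links: dict node -> list of outgoing link targets."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iterations):
        new = {u: (1 - damping) / n for u in nodes}  # teleport share
        for u in nodes:
            targets = links.get(u, [])
            if targets:  # each page splits its score among its out-links
                share = damping * rank[u] / len(targets)
                for t in targets:
                    new[t] += share
            else:        # dangling page: spread its score uniformly
                for t in nodes:
                    new[t] += damping * rank[u] / n
        rank = new
    return rank

# B receives links from both A and C, so it ends up ranked highest.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(ranks)
```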

  12. PageRank Example [Diagram: example link graphs with PageRank-related scores 0.33, 0.25, 0.5, 1]

  13. Problems With Transitive Trust • We will be assuming the network and all data are known • Players have no incentive to provide trust values • There may be strong incentive to provide incorrect trust values • Ideally we want a reputation system that is rank-strategyproof: v cannot improve his rank ordering by strategic choices of trust values. • …unfortunately, no nontrivial, monotonic, symmetric reputation system can be rank-strategyproof. • This is easy to see: any time another node that you have interacted with is ranked higher than you, just drop your outgoing edge to it to bring it down
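The edge-dropping attack is easy to demonstrate with a toy symmetric mechanism. Here we use plain indegree counting (the Approval Voting ranking defined later in the deck) as the reputation function; the node names and edges are made up.

```python
def rank_by_indegree(edges):
    """edges: set of (u, v) directed trust edges; score = indegree."""
    nodes = {x for e in edges for x in e}
    return {x: sum(1 for (u, v) in edges if v == x) for x in nodes}

edges = {("A", "W"), ("B", "W"), ("V", "W"), ("A", "V"), ("B", "V")}
before = rank_by_indegree(edges)
after = rank_by_indegree(edges - {("V", "W")})  # V drops its edge to W
print(before["V"], before["W"])  # W strictly outranks V...
print(after["V"], after["W"])    # ...until V withdraws its trust in W
```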

  14. Sybil Attacks • A single agent creates many other fake players (sybils) with the goal of improving the agent’s reputation • The malicious agent can make any structure of links and trust between sybils and himself • Incoming trust links can be redirected from the original malicious agent to any of the sybils in a way that preserves the overall amount of incoming trust
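A sketch of what such a sybil strategy looks like as a graph operation. The round-robin redistribution of incoming edges and the internal wiring below are illustrative choices; per the slide, the attacker may pick any structure among its sybils.

```python
def add_sybils(edges, v, k):
    """edges: set of (u, w) directed edges. Returns (new_edges, sybils)."""
    sybils = [f"{v}_sybil{i}" for i in range(1, k + 1)]
    incoming = sorted(u for (u, w) in edges if w == v)
    new_edges = {e for e in edges if e[1] != v}
    # Redirect incoming trust edges round-robin across {v} + sybils,
    # preserving the total amount of incoming trust.
    targets = [v] + sybils
    for i, u in enumerate(incoming):
        new_edges.add((u, targets[i % len(targets)]))
    # The attacker may add any internal structure among its sybils:
    for s in sybils:
        new_edges.add((v, s))
        new_edges.add((s, v))
    return new_edges, sybils

edges = {("A", "V"), ("B", "V"), ("C", "V")}
new_edges, sybils = add_sybils(edges, "V", 2)
print(sorted(sybils))
```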

  15. Sybil Attack Example

  16. More Definitions: Sybil Strategy Given a graph G = (V, E) and a user v ∈ V, we say that a graph G' = (V', E') and a subset S ⊆ V' form a sybil strategy for v in G if v ∈ S and collapsing S into the single node v in G' yields G. Thus a sybil strategy is denoted (G', S), and we refer to S as the sybils of v.

  17. [Diagram: original graph G containing node v]

  18. [Diagram: v replaced by a set of sybil nodes]

  19. [Diagram: collapsing the sybils back into v recovers G]

  20. More Definitions: Value-Sybilproof A reputation function F is value-sybilproof if for all graphs, there is no sybil strategy of node v that can cause v to have a higher reputation value than in the original graph.

  21. More Definitions: Rank-Sybilproof A reputation function F is rank-sybilproof if for all graphs, there is no sybil strategy that can cause node v to outrank a node w if v did not outrank w in the original graph.

  22. Theorem 27.5 Theorem: There is no nontrivial symmetric rank-sybilproof reputation function. Informal Proof: Since F is nontrivial, there is a graph in which rank(w) > rank(v). Let the sybils of v be a duplicate of the entire graph. Then by symmetry, there is some node u in the sybil set (the copy of w) such that rank(u) = rank(w). Thus a sybil of v now ties a node that strictly outranked v before, so F is not rank-sybilproof. QED

  23. Theorem 27.5 [Diagram: original graph G with nodes v and w; new graph G1 in which v's sybils duplicate G, with u the copy of w]

  24. Theorem 27.5 Theorem: There is no nontrivial symmetric rank-sybilproof reputation function. Proof: Given a graph G = (V, E) with rank(w) > rank(v) and a symmetric reputation function F. Let (V1, E1) be a disjoint isomorphic copy of (V, E), with u the copy of w. Consider G1 = (V ∪ V1, E ∪ E1), where S = V1 ∪ {v} are the sybils of v. By symmetry, F_u(G1) = F_w(G1). Thus, F is not rank-sybilproof. QED
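The duplication construction in this proof is mechanical and can be sketched directly (the primed names for the copy are our own convention):

```python
def duplicate_graph(nodes, edges):
    """Return the disjoint union of a graph with a primed copy of itself,
    as in the Theorem 27.5 construction: the copy becomes v's sybils."""
    copy_nodes = [n + "'" for n in nodes]
    copy_edges = [(u + "'", w + "'") for (u, w) in edges]
    return nodes + copy_nodes, edges + copy_edges

# v is outranked by w; v's sybils are a full copy of the graph, so by
# symmetry the copy w' receives the same reputation as w.
nodes, edges = duplicate_graph(["v", "w"], [("v", "w")])
print(nodes, edges)
```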

  25. Last Definition: K-Rank-Sybilproof Reputation function F is K-rank-sybilproof iff it is rank-sybilproof for all sybil strategies (G', S) with at most K sybils, i.e., |S| ≤ K + 1 (S includes v itself)

  26. Theorem 27.7 Theorem: There is no symmetric nontrivial K-rank-sybilproof reputation function for K > 0. Informal Proof: Consider the setup from the previous proof. There is some node w that outranks v in the original graph and whose rank equals u's in the final graph. Consider the process of slowly constructing the duplicate graph, one node at a time. At some point, adding a single node first causes rank(u) >= rank(w). Adding that single node is then a successful one-sybil strategy in that particular graph. Thus F is not 1-rank-sybilproof on all graphs.

  27. Theorem 27.7 [Diagram: original graph G with node w; new graph G1 with the duplicate partially constructed]

  28. Theorem 27.7 [Diagram: more duplicate nodes added to G1]

  29. Theorem 27.7 [Diagram: the single added node whose addition first changes the ranking of v's sybils relative to w]

  30. Implications • All symmetric reputation functions are vulnerable to this attack • Ex: PageRank, which is gamed by SEO and spam websites • Solution? • Use asymmetric approaches (a trusted seed set; the real-world solution) • The next theorems prove sybilproofness for the max-flow and shortest-path reputation functions

  31. Theorem 27.8 Theorem: The max-flow based ranking mechanism is value-sybilproof. Proof: Max Flow = Min Cut. All sybils of v must be on the same side of the cut as v, thus not on the same side as the source s. Thus, no sybil can have a higher value than the min cut, which is equal to v's max-flow value in the original graph. QED

  32. Max Flow Example

  33. Theorem 27.9 Theorem: The Pathrank reputation mechanism is value- and rank-sybilproof. Proof: Sybils cannot decrease the length of the shortest path from the source to v, thus it is value-sybilproof. For rank-sybilproofness, note that a node v can only affect another node w's ranking if v is on the shortest path to w. But if that is true, then v is strictly closer to the source than w, so v already outranks w. QED
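A minimal Pathrank sketch using BFS hop distance from the trusted source s (shorter distance = better reputation; unweighted edges are a simplification of the trust-valued graph). The made-up example shows that sybils sitting behind v cannot shorten the path from s to v.

```python
from collections import deque

def distances(edges, s):
    """edges: dict node -> list of neighbors; BFS hop distances from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in edges.get(u, []):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

edges = {"s": ["a"], "a": ["v"], "v": []}
base = distances(edges, "s")["v"]                   # v is 2 hops from s
with_sybils = dict(edges, v=["v1"], v1=["v"])       # v adds a sybil v1
print(base, distances(with_sybils, "s")["v"])       # still 2 hops
```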

  34. Practical Implications • SybilGuard [Yu et al., 2006] • Some researchers at Intel have done an empirical study of defense against Sybil attacks • They use path distance (an asymmetric measure) to get around these symmetry problems • SEO • The internet works at all because there is a set of sites known to have good reputations, so PageRank worked (at least in the past) • Also, creating sybils in this domain (web page reputation) is expensive and difficult • P2P • Some researchers have looked at how these principles apply in the P2P setting, where users want to know which other nodes will give them valid copies of a file and offer good performance

  35. Other Properties of Reputation Ranking Mechanisms • Weak Positive Response: adding an edge from u to v will not decrease the rank of v • Strong Positive Response: if w and v have equal ranks, adding an edge from u to v will increase the rank of v

  36. Other Properties of Reputation Ranking Mechanisms • Minimal Fairness: when there are no edges, all players have the same rank • Weak Monotonicity: if the set of vertices with edges going to v is a superset of the set of vertices with edges going to u, then v does not have a lower rank than u • Strong Monotonicity: if the set of vertices with edges going to v is a strict superset of the set of vertices with edges going to u, then v has a higher rank than u

  37. Other Properties of Reputation Ranking Mechanisms • Weak Union Condition: If v is ranked <= u in G, then v is ranked <= u in a new graph consisting of G and some other arbitrary graph H (with no edges between G and H). • Strong Union Condition: If v is ranked <= u in G, then v is ranked <= u in a new graph consisting of G and H even if edges are allowed between G and H in the new graph.

  38. Approval Voting Ranking Definition: v is ranked <= u iff the number of incoming edges of v is <= the number of incoming edges of u. Fact: The Approval Voting ranking mechanism satisfies minimal fairness, strong monotonicity, strong positive response, the strong union condition, and infinite non-triviality.
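The Approval Voting ranking is straightforward to implement (the node names below are our own):

```python
def approval_rank(edges):
    """edges: iterable of (u, v); returns node -> number of incoming edges.
    Ranking nodes by this score is exactly the Approval Voting mechanism."""
    nodes = {x for e in edges for x in e}
    score = {x: 0 for x in nodes}
    for (_, v) in edges:
        score[v] += 1
    return score

score = approval_rank([("a", "c"), ("b", "c"), ("a", "b")])
print(score)  # c has two incoming edges, b one, a none
```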

  39. Incentive Compatibility • Incentive Compatible: F is incentive compatible if an agent's expected utility from its ranking is not affected by manipulating its outgoing edges. • Strongly Incentive Compatible: F is incentive compatible for all nondecreasing utility functions. • Weakly Incentive Compatible: F is incentive compatible for all utility functions of the form a*k+b, where a and b are real numbers and k is the rank.

  40. Incentive Compatibility Without Minimum Fairness Proposition: There exists a ranking system F1 that satisfies strong incentive compatibility, strong positive response, infinite non-triviality, and the strong union condition.

  41. Incentive Compatibility With Minimum Fairness Theorem: There exist weakly incentive compatible, infinitely nontrivial, minimally fair ranking systems F2, F3, F4 that satisfy weak monotonicity, weak positive response, and the weak union condition, respectively. However, there is no weakly incentive compatible, nontrivial, minimally fair ranking system that satisfies any two of those three properties. Theorem: There is no weakly incentive compatible, nontrivial, minimally fair ranking system that satisfies any one of the four properties: strong monotonicity, strong positive response, the strong union condition, or strong incentive compatibility.

  42. Conclusions • We’ve seen a bunch of results about the possibility for various types of transitive trust reputation mechanisms • It’s very hard/impossible to make such mechanisms fair (symmetric) and incentive compatible (immune to malicious behavior like sybil attacks) • Asymmetry (treating certain nodes as more reliable than others) can solve these problems. • There are real world problems directly connected to these theoretical results (PageRank, P2P systems)

  43. Thanks!

  44. Theorem 27.7 Theorem: There is no symmetric nontrivial K-rank-sybilproof reputation function for K > 0. Formal Proof: Consider the previous proof. Let V be the original vertex set. Let V1 = {u1, ..., un} be the duplicate. Let Gt be the graph after the first t duplicate nodes u1, ..., ut (and their induced edges) have been added. Let St = {v, u1, ..., ut} be the sybils of v in Gt.

  45. Theorem 27.7 Proof (continued): Then in G0 = G, w outranks every node of S0, while in Gn some node of Sn ties w. Thus there is a step t at which adding the single node ut+1 first causes some node of St+1 to rank at least as high as w. Let m be the node in St+1 that has the greatest reputation in Gt+1. Then either m = ut+1 or m ∈ St; in either case the addition of node ut+1 is a successful sybil strategy for m in Gt. Thus F is not 1-rank-sybilproof on all graphs. QED.
