
Threat Scenarios and Trust Dynamics in Reputation Enabled Systems

This study explores the trustworthiness of individuals in dynamic and heterogeneous environments, focusing on information gathering, penalties, and reputation management. It examines direct and indirect attacks in both individual and community settings, and proposes strategies for resistance and evaluation mechanisms.


Presentation Transcript


  1. Threat Scenarios and Trust Dynamics in Reputation Enabled Systems Alfarez Abdul-Rahman, Stephen Hailes and Mohamed Ahmed, 8 June 2004

  2. Ubicomp Security Environmental characteristics: • Underlying systems are highly dynamic and mobile • There is massive heterogeneity in the components and services available • Components have a limited view of the global environment • Principals have conflicting beliefs, desires and intentions • There are no geographical boundaries, and organisational boundaries are fuzzy Determining the trustworthiness of individuals in such environments: • What information can be used to determine this? And how can it be used? • Where should this information be gathered from? • What penalties can be put in place to support acting on trusting intentions?

  3. Motivation • No well-defined threat models for decentralised trust management • No defined scope for decentralised trust management • To move away from ad-hoc models and enable direct comparison of trust/reputation evaluation mechanisms • Most models assume malicious agents with fixed and simple strategies • and, finally, credibility

  4. Aim • Trust management: situate decision making in the local context of interaction, based on the information a resource can gather, the risks it faces, the potential threat posed by a trustee, and the local policies of interaction: • Share information to assess the likely behaviour of individuals • Minimise the impact of subversion by segregating malicious agents

  5. Scope • Environment: • Decentralised • Temporal • Locally persistent names • Subjects: • An individual benign agent • A community of benign agents • A malicious agent • A community of malicious agents

  6. Threat model* *This model ignores the nuisance agent

  7. Direct Attack: Benign • Aim: Convince the target that the attacker is trustworthy • Method: Embellish the attacker's reputation: • Act cooperatively for N cycles • Defect against the target at cycle N+1 • Defect against a member of the target's embedded social network, reducing the value of the target's opinions • Result: Exploitation: the target is exploited
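The cooperate-for-N-cycles-then-defect strategy can be made concrete with a small simulation. This is an illustrative sketch only: the averaging reputation score, the trust threshold, and the `run_attack` helper are assumptions for exposition, not the model from the talk.

```python
# Sketch of the direct attack: the attacker embellishes its reputation by
# cooperating for N cycles, then defects at cycle N+1 once the target's
# (naive, average-based) reputation score crosses a trust threshold.
# The update rule and threshold are illustrative assumptions.

def run_attack(n_cooperate: int, trust_threshold: float = 0.8) -> list:
    """Simulate an attacker that cooperates n_cooperate times, then defects."""
    ratings = []   # the target's observations of the attacker (1=coop, 0=defect)
    log = []
    for cycle in range(n_cooperate + 1):
        reputation = sum(ratings) / len(ratings) if ratings else 0.0
        trusted = reputation >= trust_threshold
        if cycle < n_cooperate:
            ratings.append(1.0)   # embellish reputation by cooperating
            log.append(f"cycle {cycle}: cooperate (trusted={trusted})")
        else:
            # defect once the target trusts the attacker: exploitation
            outcome = "EXPLOITED" if trusted else "attack failed"
            ratings.append(0.0)
            log.append(f"cycle {cycle}: defect -> {outcome}")
    return log

print(run_attack(5)[-1])   # cycle 5: defect -> EXPLOITED
```

With no cooperation phase (`run_attack(0)`) the defection happens before any trust is built and the exploit fails, which is why the attack's cost grows with the number of cycles needed to become trusted.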

  8. Indirect Attack: Benign • Aim: Convince the target's embedded social network that the target is uncooperative • Result: Destructive: isolate the target from the community, thereby denying them service or reducing the quality of the service they receive

  9. Community • Structurally predefined: e.g. members of a board • Emergent: e.g. friends of friends • Strong/dense: large degree of connectedness • Weak: small degree of connectedness

  10. Direct Attack: Community • Aim: • Reduce the value of the community's opinions to other communities • Divide and conquer • Undermine the opinions of individual members of a community • Result: • Exploitation • Weakening • Isolation • Destruction

  11. Indirect Attack: Community • Aim: • Reduce the quantity of information available to community members: Weakening the community • Segregate the community: Isolating/Destroying the community • Undermine the opinions of individual members of a community
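"Segregating the community" can be pictured by modelling the community as an undirected trust graph and checking whether undermining a few links splits it into disconnected pieces. The graph, the example edges, and the `components` helper below are illustrative assumptions, not material from the talk.

```python
# Sketch: a weakly connected community (few edges) is segregated once an
# attacker undermines the right trust links. We count connected components
# with a simple flood fill; two or more components means the community is
# split. All names and edges here are hypothetical.

def components(nodes, edges):
    """Return the connected components of an undirected graph."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "c")]
print(len(components(nodes, edges)))                     # 1: intact community
# Attacker undermines the b-c and a-c trust links:
print(len(components(nodes, [("a", "b"), ("c", "d")])))  # 2: community split
```

This also illustrates why the strong/dense communities of slide 9 resist the attack better: the more redundant links there are, the more opinions the attacker must undermine before any component separates.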

  12. The Malicious Agent • Aim: One-shot exploit or destroy • What determines the aim? • The cost/benefit analysis of the attack: What is the cost of launching an attack? • How long does it take to develop an influential reputation? • How much do we gain from the attack? • How well connected is the community to the rest of the system? • What is the cost of re-establishing a new reputation of equal value to the one used in an attack?
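The cost/benefit questions above can be summarised in a single inequality: a one-shot exploit pays off only if its gain exceeds the cost of building an influential reputation plus the cost of re-establishing one of equal value afterwards. The linear cost model and the `attack_is_worthwhile` helper are assumptions for illustration, not the paper's analysis.

```python
# Sketch of the malicious agent's cost/benefit calculation. Assumed model:
# building a reputation costs cost_per_cycle per cooperative cycle, and
# the attacker must also pay rebuild_cost to restore an equally valuable
# reputation after burning the current one.

def attack_is_worthwhile(exploit_gain: float,
                         cycles_to_build: int,
                         cost_per_cycle: float,
                         rebuild_cost: float) -> bool:
    """One-shot exploit is rational iff gain > build cost + rebuild cost."""
    build_cost = cycles_to_build * cost_per_cycle
    return exploit_gain > build_cost + rebuild_cost

# Expensive reputations deter the attack...
print(attack_is_worthwhile(10.0, 20, 1.0, 15.0))  # False
# ...but cheap identities make re-establishment nearly free:
print(attack_is_worthwhile(10.0, 5, 1.0, 0.1))    # True
```

The second case foreshadows the next slide: when pseudonyms are cheap, `rebuild_cost` collapses and attacks become rational far more often.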

  13. The Malicious Community • Aim: Repeatedly exploit or disrupt a benign agent or community of agents • Method: • The community of malicious agents (which may just be a single agent with a number of pseudonyms) colludes to provide: • Positive ratings to members • Negative ratings to non-members • Result: • The cost of a malicious agent re-establishing a new reputation is reduced, e.g. instant refill attack, Sybil attack, community weakening • The cost of running a service is dramatically increased for the benign agent • What, then, is the value of group membership?
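The collusion pattern above (one controller rating its own pseudonyms up and outsiders down) is easy to demonstrate against a naive aggregator. The mean-of-ratings reputation function and all agent names below are illustrative assumptions.

```python
# Sketch of Sybil collusion: five pseudonyms of a single attacker give
# positive ratings to the attacker ("instant refill") and negative ratings
# to a benign non-member (badmouthing). The aggregator naively averages
# all received ratings, with no weighting by rater credibility.
from collections import defaultdict

def aggregate(ratings):
    """Naive reputation: mean of all ratings received by each agent."""
    received = defaultdict(list)
    for rater, ratee, score in ratings:
        received[ratee].append(score)
    return {agent: sum(s) / len(s) for agent, s in received.items()}

sybils = [f"sybil{i}" for i in range(5)]   # pseudonyms of one attacker
ratings = []
for s in sybils:
    ratings.append((s, "attacker", 1.0))   # positive ratings to members
    ratings.append((s, "benign", 0.0))     # negative ratings to non-members
ratings.append(("honest", "benign", 1.0))  # the only genuine rating

rep = aggregate(ratings)
print(rep["attacker"])           # 1.0  -> reputation refilled instantly
print(round(rep["benign"], 2))   # 0.17 -> honest agent badmouthed
```

Because the aggregator treats every rater as equally credible, the attacker's rebuild cost drops to the cost of minting pseudonyms, which is exactly the economics the slide describes.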

  14. Concluding Remarks • We present a classification of the attacks present in reputation-enabled communities • Aims: • Develop 'intelligent' malicious agents to test the performance of trust/reputation evaluation algorithms • Identify how resistance to attacks can be quantified • Develop reputation mechanisms that are x-degree resistant to the attacks discussed. Thank you for your attention. All questions welcome.
