Trust in multi-agent systems
Sarvapali D. Ramchurn, Dong Huynh and Nicholas R. Jennings
Presented by: Zoheb H Borbora, 10/18/2011
Nicholas R. Jennings • Prof. of Comp. Sci., Univ. of Southampton • Heads the Agents, Interaction and Complexity Group • Chief Scientific Advisor to UK govt. • Chief Scientist, Aroxo and Aerogility • Founding Editor-in-Chief, IJAAMAS
Sarvapali D. Ramchurn • Research staff in Web and Internet Science, School of Electronics & Computer Science, Univ. of Southampton • Lecturer, Intelligent Agents and Multimedia Group, Univ. of Southampton • PhD (2001-2004) • Interests • trust and reputation • coalition formation • human-agent interaction • task allocation
Motivation • Open distributed systems can be modelled as multi-agent systems • Peer-to-peer computing • Semantic Web • Web services • E-business • Interactions form the core of MAS • Coordination • Collaboration • Negotiation
Motivation • Interaction problems in MAS • How to engineer protocols (or mechanisms) for multi-agent encounters? • How do agents decide whom to interact with? • How do agents decide when to interact with each other? • Trust can minimize the uncertainty associated with interactions in open distributed systems
Definitions of Trust • Dasgupta, 1998: Trust is a belief an agent has that the other party will • do what it says it will (being honest and reliable) • or reciprocate, given an opportunity to defect to get higher payoffs • Gambetta, 1998: Trust is the subjective probability with which an individual, A, expects that another individual, B, performs a given action on which its welfare depends • Castelfranchi & Falcone, 2001: Trust is the mental counterpart of delegation
Approaches to Trust • Trust helps cope with uncertainty in beliefs and guides decision making to alleviate efficiency concerns • Individual level trust: beliefs about the honesty and reciprocity of interaction partners • System level trust: protocols and mechanisms to enforce trustworthiness of agents
Individual level trust • Learning and Evolving Trust • Learning and Evolving Strategies • Trust metrics • Reputation • Gathering ratings • Aggregating ratings • Promoting authentic ratings • Socio-cognitive models
Learning and Evolving Trust • Trust is an emergent property of direct interactions between self-interested agents • Cooperation • Defection
Learning and Evolving Strategies • Axelrod’s Tit-for-tat (1984) • Highest payoff under cooperative self-play • Earns less than the maximum payoff against a selfish agent (see the sketch below) • Trust emerges as a result of the evolution of strategies over multiple interactions (Wu & Sun, 2001) • Probabilistic reciprocity (Sen, 1996) • Collaborative liars do well over a small number of interactions • Reciprocative strategies perform well otherwise
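To make tit-for-tat concrete, here is a minimal sketch of the strategy in an iterated prisoner's dilemma. The payoff values are the standard Axelrod tournament ones; the always_defect opponent and the round count are illustrative assumptions.

```python
# Minimal iterated prisoner's dilemma (sketch). Payoffs are the
# standard Axelrod tournament values.
PAYOFF = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A selfish agent that never cooperates."""
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): cooperative self-play
print(play(tit_for_tat, always_defect))  # (9, 14): loses only the first round
```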
Learning and Evolving Strategies • Drawbacks • Assume complete information for the algorithms to work • Not tested in real-life scenarios • Outcomes of interactions are assumed to be binary • Cooperation • Defection
Learning and Evolving Strategies • Probabilistic reciprocity (Sen, 1996): Pr(agent k carries out task t_ij for agent i while it is carrying out its own task t_kl) (see the sketch below) • Takes into account • Balance = total savings – total cost • Extra cost incurred in performing the task • Average cost of tasks performed to date • Set of agent behaviors • Philanthropic agents • Selfish agents • Reciprocative agents • Individual agents
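A minimal sketch of the probabilistic-reciprocity decision rule described above. The sigmoid form and the parameters beta and tau loosely follow Sen's presentation but should be treated as assumptions, not the paper's exact equation.

```python
import math

def help_probability(extra_cost, balance, avg_cost, beta=0.5, tau=1.0):
    """Probability that agent k performs agent i's task (sketch;
    the sigmoid form and constants are assumptions, not Sen's exact formula).

    extra_cost: additional cost k incurs by taking on the task
    balance:    total savings minus total cost from past exchanges with i
    avg_cost:   average cost of tasks k has performed to date (normalizer)
    beta:       weight given to the past balance
    tau:        temperature controlling how sharp the decision is
    """
    x = (extra_cost - beta * balance) / (tau * avg_cost)
    return 1.0 / (1.0 + math.exp(x))

print(help_probability(extra_cost=2.0, balance=5.0, avg_cost=1.0))   # ~0.62: likely helps
print(help_probability(extra_cost=2.0, balance=-5.0, avg_cost=1.0))  # ~0.01: likely refuses
```

In this framing, a philanthropic agent always returns 1, a selfish agent always returns 0, and the reciprocative behavior is the sigmoid in between.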
Learning and Evolving Strategies • Discussion • Is probabilistic reciprocity better than tit-for-tat?
Learning and Evolving Strategies • Drawbacks of tit-for-tat • Initial decision • First decision is crucial • Symmetrical interactions • In actual interactions, one agent incurs the cost and the other gets the benefit • Repetition of identical scenarios • Unlikely in real life • Lack of a measure of work • The amount of benefit should be quantified (Reciprocity: A Foundational Principle for Promoting Cooperative Behavior Among Self-Interested Agents, Sandip Sen, 1996)
Trust Metrics • Witkowski, 2001 • objective trust based on past interactions • Trading scenario: leads to formation of strong, tight clusters of trading partners quickly • Trust builds trust, but unreliability breeds indifference • REGRET (Sabater and Sierra, 2002) • Considers three dimensions of reputation • Individual (direct interactions) • Social (group relation) • Ontological (combination)
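As an illustration of how REGRET-style scores might be combined, the sketch below uses simple weighted sums; the weights and the linear form are assumptions for illustration, not Sabater and Sierra's exact formulas.

```python
def regret_style_reputation(individual, social, w_ind=0.6, w_soc=0.4):
    """Combine the individual (direct-interaction) and social (group)
    reputation dimensions; the weights are illustrative assumptions."""
    return w_ind * individual + w_soc * social

def ontological_reputation(facet_scores, facet_weights):
    """Ontological dimension: combine per-facet reputations
    (e.g., delivery time, quality) into one overall score."""
    total = sum(facet_weights[f] for f in facet_scores)
    return sum(facet_weights[f] * facet_scores[f] for f in facet_scores) / total

overall = ontological_reputation(
    {'delivery': regret_style_reputation(0.8, 0.6),
     'quality':  regret_style_reputation(0.4, 0.7)},
    {'delivery': 2.0, 'quality': 1.0},
)
print(overall)  # ~0.65
```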
Reputation Models • Sabater and Sierra, 2002: Reputation can be defined as the opinion or view of someone about something • Aspects of reputation • Gathering ratings • Aggregating ratings • Promoting authentic ratings
Reputation Models • Retrieving ratings from the “social network” • Referrals (Yu et al., 2000) • Enriched model with annotation of nodes (Schillo et al., 2000) • Degree of honesty • Degree of altruism • Takes into account trustworthiness of witnesses
Reputation Models • Aggregating ratings • Problems • Simplistic aggregation can be unreliable • Fewer people report bad experiences • No rating: good or bad? • Open to manipulation by sellers • Dempster-Shafer theory of evidence (Yu and Singh, 2002) • Allows combination of beliefs • Trustworthy, untrustworthy, unknown • Assumes witnesses are honest (see the sketch below)
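Yu and Singh's belief combination can be illustrated with Dempster's rule over the frame {trustworthy, not trustworthy}; below is a generic sketch of that rule, with the witness masses invented for illustration.

```python
def combine_ds(m1, m2):
    """Dempster's rule over {'T': trustworthy, 'N': not trustworthy,
    'U': unknown (the whole frame)}. Each argument is a mass
    assignment summing to 1; returns the combined assignment."""
    # Conflict: one witness says trustworthy where the other says not.
    k = m1['T'] * m2['N'] + m1['N'] * m2['T']
    if k == 1.0:
        raise ValueError("totally conflicting evidence")
    norm = 1.0 - k
    return {
        'T': (m1['T'] * m2['T'] + m1['T'] * m2['U'] + m1['U'] * m2['T']) / norm,
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['U'] + m1['U'] * m2['N']) / norm,
        'U': (m1['U'] * m2['U']) / norm,
    }

# Two witnesses: one fairly positive, one mostly uncertain.
w1 = {'T': 0.7, 'N': 0.1, 'U': 0.2}
w2 = {'T': 0.4, 'N': 0.0, 'U': 0.6}
print(combine_ds(w1, w2))  # belief in 'T' rises, uncertainty shrinks
```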
Socio-cognitive Models • Based on “subjective” perception of opponent’s characteristics • Castelfranchi & Falcone, 2001 • Evaluation of trust for task delegation, based on decomposition into beliefs • Competence belief • Willingness belief • Persistence belief (stability) • Motivation belief
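Castelfranchi and Falcone's model is qualitative, but to make the belief decomposition concrete, here is a hedged sketch that multiplies belief strengths; the multiplicative combination and the [0, 1] scale are illustrative assumptions, not the authors' formula.

```python
def delegation_trust(beliefs):
    """Trust in delegating a task as the product of belief strengths
    in [0, 1] (illustrative; the source model is qualitative)."""
    trust = 1.0
    for name in ('competence', 'willingness', 'persistence', 'motivation'):
        trust *= beliefs[name]
    return trust

print(delegation_trust({'competence': 0.9, 'willingness': 0.8,
                        'persistence': 0.7, 'motivation': 0.9}))  # ~0.45
```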
System-level trust • Design of protocols and mechanisms of interactions that foster trustworthy behavior • Truth-eliciting interaction protocols • No utility for lying agents • Reputation mechanisms • System maintained • Security mechanisms • authentication
Truth-eliciting interaction protocols • Single-sided auctions: under first-price rules bidders do best by bidding lower than their true valuation; the Vickrey (second-price sealed-bid) auction instead makes truthful bidding a dominant strategy (see the sketch below)
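A minimal sketch of the Vickrey auction referenced above: because the winner pays the second-highest bid, shading your bid can only lose you the item and never lowers the price you pay, so reporting your true valuation is dominant.

```python
def vickrey_auction(bids):
    """Second-price sealed-bid (Vickrey) auction; needs at least two bids.
    bids: dict mapping bidder -> bid amount.
    Returns (winner, price), where price is the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

print(vickrey_auction({'a': 10, 'b': 7, 'c': 4}))  # ('a', 7)
```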
Truth-eliciting interaction protocols • Secure Vickrey scheme (Hsu & Soo, 2002) • Bidders submit encrypted bids • Auctioneer selected randomly from bidders • Auctioneer can only view bid values • Collusion-proof mechanism (Brandt, 2001)
Reputation mechanisms • Modeling reputation at the system level • Desiderata (Zacharia & Maes, 2000) • Identity change should be costly • New entrants should not be penalized with a low score • Agents should be allowed to redeem themselves • High penalty for fake transactions • Ratings from highly reputed agents should carry greater weight • Personalized evaluations • Recency of ratings (a sketch of recency weighting follows)
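One common way to satisfy the recency desideratum is to decay older ratings exponentially; the sketch below is a generic scheme of that kind, not the Sporas update from Zacharia and Maes.

```python
def recency_weighted_score(ratings, decay=0.8):
    """Aggregate ratings (oldest first, each in [0, 1]) with exponential
    recency weighting: the newest rating gets weight 1, the previous one
    weight `decay`, and so on. Generic sketch, not the Sporas formula."""
    weights = [decay ** age for age in range(len(ratings) - 1, -1, -1)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

# An agent that behaved badly long ago but well recently can redeem itself.
print(recency_weighted_score([0.1, 0.2, 0.9, 1.0]))  # ~0.64
```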
Security Mechanisms • Authentication of agents • Security requirements (Poslad et al., 2002) • Identity • Access permissions • Content integrity • Content privacy
Security Mechanisms • Mechanisms • Public key encryption • Security certificates • Public key models • PGP (decentralized) • X.509 (centralized) • These mechanisms do not enforce trustworthy behavior
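As an illustration of signature-based authentication (assuming Python's third-party cryptography package), an agent can sign its messages so peers can verify who sent them; this establishes identity and message integrity, not trustworthy behavior.

```python
# Agent message authentication with Ed25519 signatures.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

signing_key = ed25519.Ed25519PrivateKey.generate()  # held privately by the agent
verify_key = signing_key.public_key()               # published, e.g. via X.509 or PGP

message = b"bid: 42"
signature = signing_key.sign(message)

try:
    verify_key.verify(signature, message)  # raises if the message was tampered with
    print("message authentic")
except InvalidSignature:
    print("message forged or altered")
```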
Semantic Web vision (Berners-Lee et al., 2001): Lucy and Pete have to organize a series of appointments to take their mother to the doctor for a series of physical therapy sessions • Steps (from the slide's diagram): (1) authentication of Lucy's agent; (2) fetch the prescribed treatment from the doctor; (3) fetch a list of health-care providers for the treatment (within a 20-mile radius, with a rating of good or excellent) and find an appropriate match between the providers' available times and Pete's and Lucy's schedules • Trust aspects: trusted rating services based on reputation mechanisms • providers could bid based on a secure mechanism • decision choices in provider selection: past interaction history and reputation
Open Issues • Strategic lying • Collusion detection • Context • Expectations • Social networks
Discussion • How does trust differ in a social network vs. a multi-agent system? • What does it mean for a software agent to lie? • E.g., lying about the quality of goods sold • Is trust transitive? • Is it important to model distrust?