DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks CHEN Xiao Wei, Cheung Siu Ming CSE, CUHK May 15, 2008 This talk is based on the paper: Juan José Jaramillo and R. Srikant. DARWIN: Distributed and Adaptive Reputation Mechanism for Wireless Ad-hoc Networks. In Proc. of the 13th Annual ACM International Conference on Mobile Computing and Networking (MobiCom '07), Montreal, Canada, Sept. 2007
Outline • Introduction • Basic Game Theory Concepts • Network Model • Analysis of Prior Proposals • Trigger Strategies • Tit For Tat • Generous Tit For Tat • DARWIN • Contrite Tit For Tat • Definition • Performance Guarantees • Collusion Resistance • Algorithm Implementation • Simulations • Settings • Results • Conclusion & Comments
Introduction • Source communicates with distant destinations using intermediate nodes as relays • Cooperation: Nodes help relay packets for each other • In wireless networks, nodes can be selfish users that want to maximize their own welfare. • Incentive mechanisms are needed to enforce cooperation.
Introduction (Cont.) • Two types of incentive mechanisms: • Credit exchange systems: by payment • Reputation based systems: by neighbor's observation
Introduction (Cont.) • Main issue • Due to packet collisions and interference, cooperative nodes are sometimes perceived as selfish, which triggers retaliation • Contributions • Analyze the robustness of prior reputation strategies • Propose a new reputation strategy (DARWIN) and prove that it is robust, resists collusion, and achieves full cooperation
The Prisoners’ Dilemma Game • A Nash equilibrium is a strategy profile having the property that no player can benefit by unilaterally deviating from its strategy • Repeated Prisoners’ Dilemma game • The total payoff function is the discounted sum of the stage payoffs (shown below)
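The slide's formula is not reproduced here; the standard form of a repeated-game discounted payoff, with discount factor δ ∈ (0,1) and stage payoff u_i(k) for player i at stage k (symbols assumed, not taken from the slide), is:

```latex
% Standard repeated-game objective: discounted sum of stage payoffs.
% \delta \in (0,1) is the discount factor, u_i(k) the stage-k payoff of player i.
\[
  U_i = \sum_{k=0}^{\infty} \delta^{k}\, u_i(k)
\]
```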
Network Model • Assumptions • Nodes are selfish and rational, not malicious • Nodes operate in promiscuous mode • The value of a packet should be at least equal to the cost of the resources used to send it (α ≥ 1) • Any two neighbors are assumed to have uniform traffic demands toward each other, so the interaction can be modeled as a two-player game • Other Assumptions • The two nodes simultaneously decide whether to drop or forward their respective packets, and the game is repeated iteratively • Game time is divided into slots
Payoff Matrix Affine Transformation
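The payoff matrix on the slide is not reproduced in the text. A common way to write the forwarding stage game, assuming (per the Network Model slide) a gain of α ≥ 1 for having one's own packet delivered and a unit cost for relaying the neighbor's packet, is the sketch below; the paper's exact entries and its affine transformation may differ. F means "forward the neighbor's packet", D means "drop".

```latex
% Illustrative stage-game payoff matrix (row player = node i, column player = node -i);
% entries are (payoff to i, payoff to -i), assuming gain \alpha \ge 1 per own packet
% delivered and unit relaying cost.  Dropping dominates forwarding, as in the
% Prisoners' Dilemma.
\[
\begin{array}{c|cc}
          & F_{-i}                     & D_{-i} \\ \hline
 F_i      & (\alpha - 1,\ \alpha - 1)  & (-1,\ \alpha) \\
 D_i      & (\alpha,\ -1)              & (0,\ 0)
\end{array}
\]
```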
Payoff Matrix (Cont.) • Define pe ∈ (0,1) to be the probability that a packet that has been forwarded is not overheard by the originating node • Define the perceived dropping probability of node i’s neighbor at time slot k ≥ 0, as estimated by node i
Payoff Function • The average payoff at time slot k is derived from the payoff matrix and the perceived dropping probabilities • The discounted average payoff of player i starting from time slot n is then the discounted sum of these per-slot payoffs (see the sketch below)
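As a concrete illustration of the discounted average payoff (a sketch: the function name, the symbol delta, and the (1 − delta) normalization are assumptions, not quoted from the slide):

```python
def discounted_average_payoff(stage_payoffs, delta, n=0):
    """Discounted average of per-slot payoffs u_i(k) from slot n onward.

    stage_payoffs: list of per-slot payoffs [u_i(0), u_i(1), ...]
    delta: discount factor in (0, 1)
    The (1 - delta) factor normalizes the geometric weights so the
    result lies in the same range as the stage payoffs.
    """
    return (1 - delta) * sum(
        delta ** (k - n) * u for k, u in enumerate(stage_payoffs) if k >= n
    )

# Example: a node earning alpha - 1 every slot under mutual cooperation
alpha, delta = 2.0, 0.9
payoffs = [alpha - 1] * 50
print(discounted_average_payoff(payoffs, delta))  # close to alpha - 1 = 1.0
```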
Trigger Strategies • Define the dropping probability that node i should use at time slot k according to strategy S • N-step trigger strategy: node i cooperates until its neighbor’s perceived dropping probability exceeds a threshold T, and then punishes by dropping for N slots • If node i’s neighbor cooperates, the optimal value of the threshold is T = pe • In practice, pe is hard to estimate perfectly, so there are two cases: • If T < pe, cooperation will never emerge • If T > pe, player −i will be perceived as cooperative as long as it drops packets with a small enough positive probability • Full cooperation is never a Nash equilibrium with trigger strategies
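A minimal sketch of an N-step trigger strategy as described above (class and parameter names are illustrative, not the paper's pseudocode): the node cooperates until the neighbor's perceived dropping probability exceeds the threshold T, then drops everything for the next N slots.

```python
class NStepTrigger:
    """Illustrative N-step trigger strategy (threshold T, punishment length N)."""

    def __init__(self, threshold_T, punish_slots_N):
        self.T = threshold_T
        self.N = punish_slots_N
        self.punish_left = 0  # remaining punishment slots

    def dropping_probability(self, perceived_neighbor_drop):
        """Return this node's dropping probability for the current slot."""
        if self.punish_left > 0:
            self.punish_left -= 1
            return 1.0  # still punishing: drop everything
        if perceived_neighbor_drop > self.T:
            self.punish_left = self.N - 1
            return 1.0  # trigger a punishment phase of N slots total
        return 0.0  # cooperate: forward everything
```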
Tit For Tat • Tit For Tat strategy: in each slot, node i mimics its neighbor’s behavior from the previous slot • Milan et al. proved that TFT does not provide the right incentive for cooperation in wireless networks either
Generous Tit For Tat • Uses a generosity factor g that allows cooperation to be restored • GTFT strategy • GTFT is robust: no node can gain by deviating from the expected behavior, even though it cannot achieve full cooperation • But by the corollary: if both nodes use GTFT, cooperation is achieved on the equilibrium path if and only if g = pe • So GTFT also needs a perfect estimate of pe
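One way to express the generosity idea in code (a sketch under the assumption that TFT mirrors the neighbor's perceived dropping probability from the previous slot and GTFT simply subtracts the generosity factor g; this is an illustration, not necessarily the paper's exact GTFT rule):

```python
def gtft_dropping_probability(perceived_neighbor_drop_prev, g):
    """Illustrative GTFT rule (assumed form, not the paper's formula):
    mirror the neighbor's perceived dropping probability from the previous
    slot, reduced by the generosity factor g.  With g = pe, drops that were
    merely mis-perceived (probability pe) are forgiven, which is why full
    cooperation requires an exact estimate of pe."""
    return max(0.0, min(1.0, perceived_neighbor_drop_prev - g))
```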
DARWIN • GOAL: propose a reputation strategy that does not depend on a perfect estimation of pe to achieve full cooperation • FOUNDATION: “Contrite Tit For Tat” strategy in iterated Prisoners’ Dilemma
Contrite Tit For Tat • Based on the idea of contriteness • Every player is in good standing at the first stage • A player should cooperate if it is in bad standing, or if its opponent is in good standing • Otherwise, the player should defect
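A minimal sketch of the CTFT decision rule above, assuming the usual standing update (a player loses good standing by defecting against an opponent in good standing, and regains it by cooperating); the update rule is an assumption here, not quoted from the slide:

```python
def ctft_action(my_standing_good, opponent_standing_good):
    """Contrite Tit For Tat: cooperate if I am in bad standing
    (making amends) or if my opponent is in good standing;
    otherwise defect (justified retaliation)."""
    if not my_standing_good or opponent_standing_good:
        return "cooperate"
    return "defect"


def update_standing(standing_good, action, opponent_standing_good):
    """Assumed standing update: defecting against an opponent in good
    standing puts you in bad standing; cooperating restores it."""
    if action == "defect" and opponent_standing_good:
        return False
    if action == "cooperate":
        return True
    return standing_good
```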
DARWIN • Note: the DARWIN rule uses historic information, e.g. q_i(k−1) • q_i(k) acts as a measurement of bad standing • Can you find the “Contrite Tit For Tat” idea?
Performance Guarantees • Theorem: Assume 1 < γ < 1/pe; then DARWIN is subgame perfect if and only if γ satisfies a condition that depends on pe • The problem: the exact value of pe is not known, so how do we decide γ? • γ is chosen based on the estimated pe:
Performance Guarantees • Write the estimate of pe as pe(e) = pe + Δ, where −pe < Δ < 1 − pe • Substituting into the previous condition shows that the assumption 1 < γ < 1/pe still holds whenever Δ < 1 − pe • Hence a precise estimate of pe is not required
Performance Guarantees • Lemma: If both nodes use DARWIN, then cooperation is achieved on the equilibrium path; that is, p_i(k) = p_{−i}(k) = 0 for all k ≥ 0
Collusion Resistance • Define the discounted average payoff of player i using strategy S_i when it plays against player −i using strategy S_{−i} • Define ps ∈ (0,1) to be the probability that a node implementing DARWIN interacts with a colluding node
Collusion Resistance (Cont.) • From this we obtain the average payoff to a cooperative node • Similarly, the average payoff to a colluding node that interacts with a node implementing DARWIN
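The payoff expressions themselves were images on the slides. As a loose sketch of their general shape only (the conditional-payoff notation U(X | Y) and the mixture form are assumptions for illustration, not the paper's formulas), a node's average payoff mixes the payoff it gets against each type of opponent, weighted by ps:

```latex
% Illustrative mixture form only (not the paper's exact expressions).
% A cooperative (DARWIN) node meets a colluder with probability p_s:
\[
  U(D) \approx (1 - p_s)\, U(D \mid D) + p_s\, U(D \mid S),
\]
% and the payoff to a colluding node interacting with DARWIN nodes
% has an analogous mixture form.
```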
Collusion Resistance (Cont.) • The average payoff can be bounded • A group of colluding nodes cannot gain by unilaterally deviating if and only if U(S) < U(D)
Collusion Resistance (Cont.) • Define strategy S to be a sucker strategy if
Algorithm Implementation • Count the number of messages sent to j for forwarding • Count the number of messages j actually forwarded • Their ratio denotes connectivity, i.e. the forwarding ratio • From these per-slot ratios, j’s average connectivity ratio is computed
Algorithm Implementation (Cont.) • Use equations (6) and (7) of the paper to find the dropping probability • We estimate pe as the fraction of time during which at least one node different from j transmits (see the sketch below)
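A minimal sketch of the per-slot bookkeeping these two slides describe (class and method names are invented for illustration, the averages are simple arithmetic means, and "fraction of time" is approximated by fraction of slots; the actual dropping probability then comes from equations (6) and (7) of the paper):

```python
class NeighborMonitor:
    """Per-neighbor bookkeeping done in promiscuous mode, one record per time slot."""

    def __init__(self):
        self.sent_to_j = 0        # messages handed to neighbor j for forwarding
        self.forwarded_by_j = 0   # messages we overheard j forwarding
        self.busy_slots = 0       # slots in which some node other than j transmitted
        self.total_slots = 0
        self.connectivity_history = []  # per-slot forwarding ratios of j

    def mark_slot_busy(self):
        """Call at most once per slot when a node other than j is heard transmitting."""
        self.busy_slots += 1

    def end_of_slot(self):
        """Close the current slot and record j's forwarding (connectivity) ratio."""
        ratio = (self.forwarded_by_j / self.sent_to_j) if self.sent_to_j else 1.0
        self.connectivity_history.append(ratio)
        self.total_slots += 1
        self.sent_to_j = 0
        self.forwarded_by_j = 0

    def average_connectivity(self):
        """j's average connectivity ratio over past slots."""
        h = self.connectivity_history
        return sum(h) / len(h) if h else 1.0

    def estimated_pe(self):
        """Estimate pe as the fraction of slots in which at least one node
        other than j transmits (a proxy for missed overhearing)."""
        return self.busy_slots / self.total_slots if self.total_slots else 0.0
```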
Simulations • Settings • ns-2 • Dynamic Source Routing (DSR) protocol • Area: 670 × 670 m² • 50 nodes randomly placed, some of them selfish • 14 source-destination pairs • Packet size: 512 bytes • Simulation time: 800 s; time slot: 60 s • γ = 2
Simulations • Normalized forwarding ratio • Fraction of forwarded packets in the network under consideration divided by fraction of forwarded packets in a network with no selfish nodes • Objective • Find how normalized forwarding ratios for both cooperative and selfish nodes vary with: • Dropping probability of selfish nodes • Source rate • Percentage of selfish nodes
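A direct reading of the metric definition above (function and counter names are illustrative):

```python
def normalized_forwarding_ratio(forwarded, sent, baseline_forwarded, baseline_sent):
    """Fraction of packets forwarded in the network under consideration,
    divided by the same fraction in a reference network with no selfish nodes."""
    return (forwarded / sent) / (baseline_forwarded / baseline_sent)
```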
Simulations • Normalized forwarding ratio for different dropping ratios of selfish nodes (5 selfish nodes, 2 packets/s)
Simulations • Normalized forwarding ratio for different source rates (5 selfish nodes, 100% dropping ratio for selfish nodes)
Simulations • Normalized forwarding ratio for different numbers of selfish nodes (2 packets/s, 100% dropping ratio for selfish nodes) • Key point: selfishness does not improve performance; nodes are rational
Conclusion • Studied how reputation-based mechanisms help cooperation emerge among selfish users • Showed properties of previously proposed schemes • Proposed a new mechanism called DARWIN • DARWIN is • Robust to imperfect measurements of pe • Collusion-resistant • Able to achieve full cooperation (by the Lemma) • Insensitive to parameter choices
Comments • Contribution: applies CTFT to wireless ad-hoc networks • Reliable as long as the assumptions hold • Nodes are assumed not to lie about the perceived dropping probability: liars can get better payoffs! • Nodes are assumed to be rational • Only the previous stage is considered • Normalized forwarding ratios, but not payoffs, are shown in the simulation results