Model Checking Nash Equilibria in MAD Distributed Systems
FMCAD 2008, Formal Methods in Computer Aided Design, Portland, OR, USA, November 17–20, 2008
SAD Distributed Systems
[Figure: an administrative domain and its administrator.]
In a Single Administrative Domain (SAD) distributed system, all nodes belong to the same administrative domain.
MAD Distributed Systems
[Figure: several administrative domains and their administrators.]
In a Multiple Administrative Domain (MAD) distributed system, each node belongs to its own administrative domain and owns its resources.
Examples of MAD Systems
• Internet routing (e.g., each router is an administrative domain)
• Wireless mesh routing (e.g., each node is an administrative domain)
• File distribution (e.g., each PC is an administrative domain)
• Cooperative backup (e.g., each PC is an administrative domain)
• Archival storage (e.g., each PC is an administrative domain; see, e.g., http://www.oracorp.com/Research/p2pStorage.html)
Node Behaviours in SAD Systems
• Altruistic (or correct, or obedient) nodes: nodes that faithfully follow the proposed protocol.
• Byzantine nodes: nodes that may arbitrarily deviate from the proposed protocol, for example because of hardware failures, software failures, or malicious attacks.
SAD Correctness
A protocol P for a SAD distributed system S is expected to tolerate up to f Byzantine nodes. Thus correctness for SAD systems is typically a statement of the form: protocol P for system S satisfies property φ as long as there are no more than f Byzantine nodes in S.
Node Behaviours in MAD Systems
• Byzantine nodes (as in SAD).
• Altruistic nodes (as in SAD).
• Rational (or selfish) nodes: nodes whose administrators are selfishly intent on maximizing their own benefit from participating in the system. Rational nodes may arbitrarily change the protocol if that is to their advantage; in particular, they may change their hardware or software.
MAD Correctness (1)
Problem: in a MAD system any node may behave selfishly. This rules out the classical approach of showing that a given property holds when there are no more than f Byzantine nodes.
Solution: show BAR (Byzantine, Altruistic, Rational) tolerance. Namely, a protocol is BAR tolerant if it guarantees the desired property despite the presence of Byzantine and rational players.
MAD Correctness (2)
It suffices to show the following:
• Correctness when there are only Byzantine and altruistic players.
• That no rational node has an interest in deviating from the proposed protocol.
Note that point 1 is SAD correctness (and can be done using well-known model checking techniques), while point 2 amounts to showing that the proposed protocol is a Nash equilibrium, in a suitable sense. This is our focus here.
Outline
• Formal definition of proposed protocol and mechanism.
• Formal definition of Nash equilibrium for mechanisms.
• A symbolic algorithm verifying that a given proposed protocol is a Nash equilibrium for a given mechanism.
• Experimental results showing the feasibility of the proposed approach.
Mechanism
An n-player mechanism M is a tuple ⟨S, I, A, T, B, h, β⟩ such that:
• States: S = ⟨S1, ..., Sn⟩
• Initial states: I = ⟨I1, ..., In⟩
• Actions: A = ⟨A1, ..., An⟩
• Underlying (Byzantine) behaviour: B = ⟨B1, ..., Bn⟩ with Bi : S × Ai × Si → Boole such that:
  – No deadlock: ∀s ∃ai ∃si′ such that Bi(s, ai, si′)
  – Deterministic: Bi(s, ai, si′) ∧ Bi(s, ai, si″) implies si′ = si″
• Proposed protocol: T = ⟨T1, ..., Tn⟩ with Ti : S × Ai → Boole such that:
  – Realizability: Ti(s, ai) implies ∃si′ Bi(s, ai, si′)
  – Nonblocking: ∀s ∃ai Ti(s, ai)
• Reward: h = ⟨h1, ..., hn⟩ with hi : S × A → ℝ
• Discount: β = ⟨β1, ..., βn⟩ with βi ∈ (0, 1)
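To make the definition concrete, here is a minimal explicit-state encoding of a mechanism in Python. This is a sketch; all names (Mechanism, State, and so on) are illustrative choices of ours, not identifiers from the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

State = Tuple[int, ...]    # one local state per agent
Action = Tuple[str, ...]   # one local action per agent

@dataclass
class Mechanism:
    """An n-player mechanism M = <S, I, A, T, B, h, beta>."""
    n: int                                     # number of agents
    states: Sequence[State]                    # S (global states)
    initial: Sequence[State]                   # I
    actions: Sequence[Sequence[str]]           # A_i for each agent i
    B: Callable[[int, State, str, int], bool]  # B_i(s, a_i, s_i') -- underlying behaviour
    T: Callable[[int, State, str], bool]       # T_i(s, a_i)       -- proposed protocol
    h: Callable[[int, State, Action], float]   # h_i(s, a)         -- reward
    beta: Sequence[float]                      # beta_i in (0, 1)  -- discount
```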
Mechanism Transition Relation
Let Z ⊆ {1, ..., n} be the set of Byzantine agents. Agent i's move relation is:
BTi(Z, s, ai, si′) = Bi(s, ai, si′) if i ∈ Z (Byzantine agent)
BTi(Z, s, ai, si′) = Bi(s, ai, si′) ∧ Ti(s, ai) if i ∉ Z (altruistic agent)
Agents move synchronously: BT(Z, s, a, s′) = BT1(Z, s, a1, s1′) ∧ … ∧ BTn(Z, s, an, sn′)
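Continuing the Python sketch above, the two cases of the transition relation translate directly (the function names are ours):

```python
def BT_i(M: "Mechanism", Z: frozenset, i: int, s, a_i, s_i_next) -> bool:
    """Agent i's move: Byzantine agents (i in Z) only need B_i to hold;
    altruistic agents must additionally obey the proposed protocol T_i."""
    if i in Z:
        return M.B(i, s, a_i, s_i_next)
    return M.B(i, s, a_i, s_i_next) and M.T(i, s, a_i)

def BT(M: "Mechanism", Z: frozenset, s, a, s_next) -> bool:
    """Synchronous global move: all agents step at once."""
    return all(BT_i(M, Z, i, s, a[i], s_next[i]) for i in range(M.n))
```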
Example of Mechanism
[Diagram: each agent has local states 0 (initial), 1, 2. The proposed protocol Ti cycles 0 →work→ 1 →gain→ 0; the underlying behaviour Bi additionally allows 0 →sleep→ 2 →reset→ 0.]
Let βi = 0.5. Working has a cost: hi(s, ⟨a−i, work⟩) = −1. There is no cost for sleeping or resetting: hi(s, ⟨a−i, sleep⟩) = hi(s, ⟨a−i, reset⟩) = 0. Gaining yields a reward only if everyone worked: hi(s, ⟨a−i, gain⟩) = 4 if s = ⟨1, 1, …, 1⟩, and 0 otherwise.
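Under our reading of the slide's diagram (the work/gain cycle is the proposed protocol, sleep/reset are extra moves only the underlying behaviour permits), the example can be instantiated as follows. This is an assumption-laden sketch, not code from the paper:

```python
BETA = 0.5  # beta_i for every agent

# Local moves: T_i is the work/gain cycle, B_i additionally allows sleep/reset.
LOCAL = {(0, "work"): 1, (1, "gain"): 0,    # prescribed by T_i
         (0, "sleep"): 2, (2, "reset"): 0}  # available only to Byzantine agents

def B_i(i, s, a_i, s_i_next):
    return LOCAL.get((s[i], a_i)) == s_i_next

def T_i(i, s, a_i):
    return (s[i], a_i) in {(0, "work"), (1, "gain")}

def h_i(i, s, a):
    if a[i] == "work":
        return -1.0                                    # working has a cost
    if a[i] == "gain":
        return 4.0 if all(x == 1 for x in s) else 0.0  # reward only if everyone worked
    return 0.0                                         # sleeping and resetting are free
```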
Paths in Mechanisms
[Diagram: the example mechanism, with rewards written on the actions: work/−1, sleep/0, reset/0, gain/4 or 0.]
A path in (M, Z) is a finite or infinite sequence π = s(0) a(0) s(1) a(1) … s(t) a(t) s(t+1) … such that BT(Z, s(t), a(t), s(t+1)) holds.
Let n = 2 and Z = {1}. An example of an (M, Z) path is:
π = ⟨0, 0⟩ ⟨sleep, work⟩ ⟨2, 1⟩ ⟨reset, gain⟩ ⟨0, 0⟩ ⟨work, work⟩ ⟨1, 1⟩ ⟨gain, gain⟩ ⟨0, 0⟩ …
Value of a path for agent i: vi(π) = Σt≥0 βi^t · hi(s(t), a(t))
Value of the path above for agent 1: v1(π) = Σt=0..3 β1^t · h1(s(t), a(t)) = 1·0 + 0.5·0 + 0.25·(−1) + 0.125·4 = 0.25
Value of the path above for agent 2: v2(π) = Σt=0..3 β2^t · h2(s(t), a(t)) = 1·(−1) + 0.5·0 + 0.25·(−1) + 0.125·4 = −0.75
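The discounted sum is straightforward to compute. A small helper (ours, not the paper's) reproduces the two values on this slide when combined with the h_i sketched above:

```python
def path_value(i, beta_i, h, path):
    """v_i(pi) = sum over t of beta_i**t * h_i(s(t), a(t)),
    for a finite path given as a list of (state, joint action) pairs."""
    return sum(beta_i ** t * h(i, s, a) for t, (s, a) in enumerate(path))

pi = [((0, 0), ("sleep", "work")), ((2, 1), ("reset", "gain")),
      ((0, 0), ("work", "work")),  ((1, 1), ("gain",  "gain"))]

assert path_value(0, 0.5, h_i, pi) == 0.25    # agent 1
assert path_value(1, 0.5, h_i, pi) == -0.75   # agent 2
```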
M sleep/0 gain/4- 0 2 0 1 Ti : * Bi : work/-1 reset/0 = < sleep, reset, work > = <0, 0> <sleep, work> <2,1> <reset, gain> <0, 0> <work, work> Strategies A strategy is a finite or infinite sequence of local actions for a given player. For example, = <sleep, reset, work, gain> is a strategy for player 1. A strategy for player i agrees with (is associated to) path iff where n = 2 and Z = {1}.
Value of a Strategy
The value of strategy σ in state s for player i, vi(Z, s, σ), is the minimum value over the paths (of the same length as σ) that agree with σ. That is:
vi(Z, s, σ) = min { vi(π) | π is an (M, Z) path from s that agrees with strategy σ of agent i }
In other words, we assume that all other players play against i (a pessimistic view); namely, they try to minimize i's gain.
For example, let σ = ⟨work, gain⟩. Then:
v1(∅, ⟨0, 0⟩, σ) = v1({1}, ⟨0, 0⟩, σ) = 1·(−1) + 0.5·4 = 1
v1({2}, ⟨0, 0⟩, σ) = v1({1, 2}, ⟨0, 0⟩, σ) = 1·(−1) + 0.5·0 = −1
Value of a State
The value of state s at horizon k for player i, vi^k(Z, s), is the value of the best strategy of length k for i starting at s. That is:
vi^k(Z, s) = max { vi(Z, s, σ) | σ is a strategy of length k for agent i }
For example:
v1^2(∅, ⟨0, 0⟩) = v1^2({1}, ⟨0, 0⟩) = 1·(−1) + 0.5·4 = 1 (witness: ⟨work, gain⟩)
v1^2({2}, ⟨0, 0⟩) = 1·(−1) + 0.5·0 = −1 (witness: ⟨work, gain⟩)
v1^2({1, 2}, ⟨0, 0⟩) = 1·0 + 0.5·0 = 0 (witness: ⟨sleep, reset⟩)
Worst Case Value of a State
The worst-case value of state s at horizon k for player i, ui^k(Z, s), is the value of the worst strategy of length k for i starting at s. That is:
ui^k(Z, s) = min { vi(Z, s, σ) | σ is a strategy of length k for agent i }
For example:
u1^2(∅, ⟨0, 0⟩) = 1·(−1) + 0.5·4 = 1 (witness: ⟨work, gain⟩)
u1^2({1}, ⟨0, 0⟩) = 1·0 + 0.5·0 = 0 (witness: ⟨sleep, reset⟩)
u1^2({2}, ⟨0, 0⟩) = u1^2({1, 2}, ⟨0, 0⟩) = 1·(−1) + 0.5·0 = −1 (witness: ⟨work, gain⟩)
We omit the superscript k when k = ∞.
Computing Values of States
Proposition. Both the value vi^k(Z, s) and the worst-case value ui^k(Z, s) of states at horizon k for player i can be computed using a dynamic programming approach.
Sketch: if from state s action a leads to state s1 and action b leads to state s2, then
vi^k(Z, s) = max { vi(Z, s, σ) | σ is a strategy of length k for agent i } = max { hi(s, a) + βi·V(Z, s1), hi(s, b) + βi·V(Z, s2) }
where V is the value at horizon k − 1.
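Here is a plain explicit-state rendering of this backward induction (the paper computes it symbolically; this sketch and its helper joint_moves are ours, reusing the Mechanism and BT sketches above). With maximize=True it computes vi^k; with maximize=False, ui^k. For the deviation value vi^k(Z ∪ {i}, s), pass Z ∪ {i}, so that player i is freed from Ti:

```python
from itertools import product

def joint_moves(M, Z, i, s, a_i):
    """All pairs (a, s') with a[i] == a_i and BT(Z, s, a, s')."""
    for a in product(*M.actions):
        if a[i] != a_i:
            continue
        for s2 in M.states:
            if BT(M, Z, s, a, s2):
                yield a, s2

def state_values(M, Z, i, k, maximize=True):
    """Horizon-k values by backward induction: player i picks its own local
    action (max for v_i^k, min for u_i^k); the other players' actions are
    resolved pessimistically (min), as in the definition of v_i(Z, s, sigma)."""
    pick = max if maximize else min
    V = {s: 0.0 for s in M.states}                    # v_i^0 = u_i^0 = 0
    for _ in range(k):
        V_new = {}
        for s in M.states:
            per_action = []                           # one value per enabled a_i
            for a_i in M.actions[i]:
                outcomes = [M.h(i, s, a) + M.beta[i] * V[s2]
                            for a, s2 in joint_moves(M, Z, i, s, a_i)]
                if outcomes:
                    per_action.append(min(outcomes))  # adversarial environment
            # per_action is nonempty by the nonblocking/no-deadlock conditions
            V_new[s] = pick(per_action)
        V = V_new
    return V
```

On the work/sleep example (agents 0-indexed), state_values(M, frozenset({0}), 0, 2, maximize=True)[(0, 0)] should reproduce v1^2({1}, ⟨0, 0⟩) = 1 from the previous slide.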
Nash
Intuitively, a mechanism M is ε-f-Nash if, as long as the number of Byzantine agents is no more than f, no rational agent has an interest greater than ε in deviating from the proposed protocol.
Definition. Let M be an n-player mechanism, f ∈ {0, 1, ..., n} and ε > 0.
• M is ε-f-Nash for player i if ∀Z ∈ Pf([n] − {i}) ∀s ∈ I: ui(Z, s) + ε ≥ vi(Z ∪ {i}, s).
• M is ε-f-Nash if it is ε-f-Nash for each player i ∈ [n].
Here Pf(Q) denotes the set of subsets of Q of size at most f.
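Given the horizon-k values, the definition translates almost verbatim into a check over all small Byzantine sets and initial states. A sketch reusing state_values from above, with the caveat that the finite horizon only approximates ui and vi (the next slides bound the resulting error):

```python
from itertools import combinations

def is_eps_f_nash_for(M, i, f, eps, k):
    """Check the eps-f-Nash condition for player i at horizon k:
    u_i^k(Z, s) + eps >= v_i^k(Z u {i}, s) for all |Z| <= f with i not in Z,
    and all initial states s."""
    others = [j for j in range(M.n) if j != i]
    for r in range(f + 1):
        for Z in map(frozenset, combinations(others, r)):
            u = state_values(M, Z, i, k, maximize=False)
            v = state_values(M, Z | {i}, i, k, maximize=True)
            if any(u[s] + eps < v[s] for s in M.initial):
                return False
    return True
```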
Finite and Infinite Paths
[Diagram: a six-state mechanism (states 0–5) with βi = 0.5 and rewards on the actions: a/−1, b/0, c/1, d/3, e/−3, f/−1, g/−3, h/3.]
Strategy a d e d e …: u1^k(∅, 0) = −1 + 3/2 − 3/4 + 3/8 − … = −1 + (3/2)·Σt=0..k−2 (−1/2)^t = (−1)^k·(1/2)^{k−1}
Strategy a (d e)* when k is even, c (g h)* when k is odd: v1^k({1}, 0) = (1/2)^{k−1}
Thus: if k is odd then u1^k(∅, 0) < v1^k({1}, 0), and if k is even then u1^k(∅, 0) = v1^k({1}, 0).
Hence there is no k̄ > 0 such that for all k > k̄, u1^k(∅, 0) ≥ v1^k({1}, 0). In other words, although the above mechanism is ε-0-Nash, there is no k̄ > 0 such that the ε-0-Nash property can be proved by looking only at finite prefixes of length at most k̄.
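The two closed forms are easy to sanity-check numerically. The snippet below (ours) evaluates the discounted rewards of just the two strategy families named on the slide for small k and exhibits the odd/even alternation:

```python
def discounted(rewards, beta=0.5):
    return sum(beta ** t * r for t, r in enumerate(rewards))

def prefix(first, cycle, k):
    """First k rewards of the strategy: one initial action, then a 2-cycle."""
    seq = [first]
    while len(seq) < k:
        seq.extend(cycle)
    return seq[:k]

for k in range(1, 8):
    u = discounted(prefix(-1, [3, -3], k))   # a (d e)* : equals (-1)**k / 2**(k-1)
    alt = discounted(prefix(1, [-3, 3], k))  # c (g h)* : equals -(-1)**k / 2**(k-1)
    print(k, u, max(u, alt), u >= max(u, alt))  # False for odd k, True for even k
```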
Main Theorem
Let M be an n-player mechanism, f ∈ {0, 1, ..., n}, ε > 0 and δ > 0. Furthermore, for each agent i let:
• Mi = max { |hi(s, a)| : s ∈ S and a ∈ A }
• Ei(k) = 5·βi^k·Mi / (1 − βi)
• λi(k) = max { vi^k(Z ∪ {i}, s) − ui^k(Z, s) | s ∈ I and Z ∈ Pf([n] − {i}) }
• ε1(i, k) = λi(k) − 2Ei(k)
• ε2(i, k) = λi(k) + 2Ei(k)
For each agent i let ki be such that 4Ei(ki) < δ. Then:
• If ∀i [ε ≥ ε2(i, ki)] then M is ε-f-Nash.
• If ∃i [ε < ε1(i, ki)] then M is not ε-f-Nash.
• Otherwise M is (ε + δ)-f-Nash.
[Number line: for ε below ε1(i, k), M is not ε-f-Nash; between ε1(i, k) and ε2(i, k), M is (ε + δ)-f-Nash; above ε2(i, k), M is ε-f-Nash.]
Proof idea: compute an upper bound on the error made by considering only paths of length up to k.
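For concreteness, the horizon ki can be found by increasing k until the bound holds. A sketch, with the worked numbers below assuming the earlier example's Mi = 4 and βi = 0.5 together with δ = 0.01:

```python
def horizon(M_i, beta_i, delta):
    """Smallest k with 4 * E_i(k) < delta,
    where E_i(k) = 5 * beta_i**k * M_i / (1 - beta_i)."""
    k = 1
    while 4 * 5 * beta_i ** k * M_i / (1 - beta_i) >= delta:
        k += 1
    return k

# 4 * E_i(k) = 160 * 0.5**k here, which first drops below 0.01 at k = 14:
assert horizon(4, 0.5, 0.01) == 14
```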
Symbolic Algorithm
for i = 1, ..., n do
  let k be such that 4Ei(k) < δ;
  let b := ⟨b1, ..., bn⟩ and Ci(b) := [Σj=1..n, j≠i bj ≤ f];
  vi^0(b, s) := 0; ui^0(b, s) := 0;
  for t = 1, ..., k do
    vi^t(b, s) := max { min { hi(s, ⟨ai, a−i⟩) + βi·vi^{t−1}(b, s′) | BT(b[bi := 1], s, ⟨ai, a−i⟩, s′) ∧ Ci(b) ∧ a−i ∈ A−i } | ai ∈ Ai };
    ui^t(b, s) := min { min { hi(s, ⟨ai, a−i⟩) + βi·ui^{t−1}(b, s′) | BT(b[bi := 0], s, ⟨ai, a−i⟩, s′) ∧ Ci(b) ∧ a−i ∈ A−i } | ai ∈ Ai };
  λi := max { vi^k(b, s) − ui^k(b, s) | Init(s) ∧ Ci(b) };
  ε1(i) := λi − 2Ei(k); ε2(i) := λi + 2Ei(k);
  if (ε < ε1(i)) return (FAIL);
if (∀i (ε2(i) < ε)) return (PASS with ε) else return (PASS with (ε + δ))
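Put together, here is an explicit-state rendering of the whole check. The paper's version is symbolic, operating on encoded sets of states and Byzantine-set vectors b; this sketch instead reuses state_values and horizon from above and enumerates Byzantine sets explicitly, so it is illustrative only:

```python
from itertools import combinations

def check_nash(M, f, eps, delta, M_bound):
    """Return 'FAIL', ('PASS', eps) or ('PASS', eps + delta).
    M_bound[i] is M_i = max |h_i(s, a)|."""
    all_tight = True                               # tracks: for all i, eps2(i) < eps
    for i in range(M.n):
        k = horizon(M_bound[i], M.beta[i], delta)  # smallest k with 4 * E_i(k) < delta
        E_k = 5 * M.beta[i] ** k * M_bound[i] / (1 - M.beta[i])
        others = [j for j in range(M.n) if j != i]
        lam = max(
            state_values(M, Z | {i}, i, k, maximize=True)[s]    # v_i^k(Z u {i}, s)
            - state_values(M, Z, i, k, maximize=False)[s]       # u_i^k(Z, s)
            for r in range(f + 1)
            for Z in map(frozenset, combinations(others, r))
            for s in M.initial)
        eps1, eps2 = lam - 2 * E_k, lam + 2 * E_k
        if eps < eps1:
            return "FAIL"                          # provably not eps-f-Nash
        if eps2 >= eps:
            all_tight = False                      # can only conclude (eps+delta)-f-Nash
    return ("PASS", eps) if all_tight else ("PASS", eps + delta)
```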
Experimental Results (1)
[Diagram: jobs J-0 … J-(q−1), each mapped to the tasks T-0 … T-(q−1) it needs.]
Agent 1 task sequence: 0, 1, ..., q−1. Agent 2 task sequence: 1, 2, 3, ..., q−1, 0. Agent 3 task sequence: 2, 3, 4, ..., q−1, 0, 1.
An agent incurs a cost by working towards the completion of its currently assigned task. Once an agent has completed a task, it waits for its reward (if any) before it considers working on the next task in its sequence. As soon as an agent receives its reward, it considers working on the next task in its list.
A job is completed if, for each task it needs, there exists at least one agent that has completed that task. In that case, each such agent receives a reward. Note that even if two or more agents have completed the same task, all of them get a reward.
Experimental results were obtained on a 64-bit dual quad-core 3 GHz Intel Xeon Linux PC with 8 GB of RAM.
Conclusions
We presented:
• A formal definition of proposed protocol and mechanism.
• A formal definition of Nash equilibrium for mechanisms.
• A symbolic algorithm verifying that a given proposed protocol is a Nash equilibrium for a given mechanism.
• Experimental results showing the feasibility of our approach.