Games where you can play optimally without any memory
Authors: Hugo Gimbert and Wieslaw Zielonka
Presented by Moria Abadi
Arena and Play
[Diagram: an arena whose states are split between Max and Min; an example play is traced through the arena, with color(play) = blue blue yellow …]
Payoff Mapping of Player
A payoff mapping u assigns to each play x the payoff u(x) that the player wins in x. u(x) ≤ u(y) means that y is at least as good for the player as x.
Example 1 – Parity Game
Max wins payoff 1 if the highest color visited infinitely often is odd; otherwise his payoff is 0.
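For intuition, the parity payoff can be computed for ultimately periodic plays, represented here as a finite prefix followed by an infinitely repeated cycle (this encoding is an illustrative convention, not from the paper):

```python
def parity_payoff(prefix, cycle):
    """Parity payoff of Max for the play prefix . cycle^omega.

    The colors visited infinitely often are exactly those in the cycle,
    so only the cycle determines the payoff."""
    highest = max(cycle)  # highest color visited infinitely often
    return 1 if highest % 2 == 1 else 0

print(parity_payoff([4, 2], [1, 3]))  # highest recurring color 3 is odd  -> 1
print(parity_payoff([5], [2, 4]))     # highest recurring color 4 is even -> 0
```

Note that the prefix never matters: the parity payoff is prefix independent.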
Example 2 – Sup Game
Max wins the highest value seen during the play: u(c0c1c2…) = sup_n c_n.
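In the same illustrative prefix/cycle encoding, the sup payoff does depend on the prefix, unlike the parity payoff:

```python
def sup_payoff(prefix, cycle):
    """Sup payoff: the highest color seen anywhere in prefix . cycle^omega."""
    return max(prefix + cycle)  # cycle assumed nonempty

print(sup_payoff([1, 5, 2], [0, 3]))  # -> 5: the 5 in the prefix counts
```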
Example 4 – Mean Payoff Game
The mean payoff is the limit of the averages (1/n) Σ_{i=1}^{n} c_i; this limit does not always exist, so limsup (or liminf) is used instead.
Example 4 – Mean Payoff Game
[Diagram: an arena with rewards 0, 1 and 2 on the transitions, illustrating the averages of a play.]
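The failure of the limit can be seen concretely: a play made of exponentially growing blocks of 1s and 0s has running averages that oscillate forever (a sketch using an illustrative encoding of a finite approximation of a play):

```python
def running_averages(colors):
    """Averages (1/n) * (sum of the first n colors), for n = 1, 2, ..."""
    total, avgs = 0, []
    for n, c in enumerate(colors, start=1):
        total += c
        avgs.append(total / n)
    return avgs

# blocks 1, 00, 1111, 0^8, ... of doubling length
play, bit, length = [], 1, 1
while len(play) < 1 << 12:
    play += [bit] * length
    bit, length = 1 - bit, 2 * length

avgs = running_averages(play)
# the averages keep returning near 2/3 (after 1-blocks) and 1/3 (after 0-blocks),
# so the plain limit does not exist: limsup and liminf differ
print(max(avgs[100:]), min(avgs[100:]))
```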
Preference Relation of Player
⊑ is a complete preorder relation on C^ω.
x ⊑ y means y is at least as good for the player as x.
x ⊏ y denotes x ⊑ y but not y ⊑ x.
A payoff mapping u induces ⊑: x ⊑ y iff u(x) ≤ u(y).
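The relation induced by a payoff mapping is immediate to state in code; a minimal sketch (function names are illustrative):

```python
def induced_preference(u):
    """The preference relation induced by a payoff mapping u:
    x is at most as good as y iff u(x) <= u(y)."""
    return lambda x, y: u(x) <= u(y)

# sup payoff on (finite approximations of) plays
pref = induced_preference(max)
print(pref([1, 2], [0, 5]))  # True: payoff 2 <= payoff 5
print(pref([9], [0, 5]))     # False: payoff 9 > payoff 5
```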
Antagonistic Games
• x ⊑⁻¹ y iff y ⊑ x
• ⊑ is the preference relation of Max
• ⊑⁻¹ is the preference relation of Min
Games, Strategies
• Game (G, ⊑)
• G is a finite arena G = (S_Max, S_Min, E)
• ⊑ is a preference relation for player Max
• σ is a strategy for Max
• τ is a strategy for Min
• p_G(t, σ, τ) is the play in G with source t consistent with both σ and τ
Optimal Strategies – Intuition
Consider the play p_G(t, σ#, τ#). The strategies σ# and τ# are optimal if neither Max nor Min can gain by exchanging his strategy unilaterally.
Optimal Strategies – Definition
Given (G, ⊑), the strategies σ# and τ# are optimal if, for all states s and all strategies σ and τ:
color(p_G(s, σ, τ#)) ⊑ color(p_G(s, σ#, τ#)) ⊑ color(p_G(s, σ#, τ))
The Main Question
Under which conditions do Max and Min have optimal memoryless strategies for all games?
Some conditions on ⊑ will be defined such that Min and Max have optimal memoryless strategies iff ⊑ satisfies these conditions. Parity games, mean payoff games, … satisfy them.
The operator [L]
• Rec(C): all languages of finite words over C recognizable by finite automata
• L ∈ Rec(C)
• Pref(L): all prefixes of the words in L
• [L] = { x ∈ C^ω | every finite prefix of x is in Pref(L) }
Lemma 3
[L ∪ M] = [L] ∪ [M]
Proof idea: Pref(L ∪ M) = Pref(L) ∪ Pref(M). If some prefix of x is not in Pref(L) and some prefix of x is not in Pref(M), then the longer of the two is in neither, hence not in Pref(L ∪ M).
Co-accessible Automaton
• From any state there is a (possibly empty) path to a final state.
[Diagram: a co-accessible automaton over C = {0,1} with initial state i.]
Lemma 4
Let A = (Q, i, F, Δ) be a co-accessible finite automaton recognizing a language L. Then [L] = { color(p) | p is an infinite path in A with source(p) = i }.
[Diagram: the co-accessible automaton over C = {0,1}.]
(⊆ of the right-hand side:) For an infinite path p = e0e1e2… from i, for every n there is a path from target(e_n) to a final state, so every finite prefix of color(p) is in Pref(L); hence color(p) ∈ [L].
Lemma 4 (continued)
Conversely, let x = c0c1c2… ∈ [L]. For every n there is a path from i matching c0…c_n; since A is finite, these paths yield an infinite path p from i with color(p) = x (König's lemma).
Extension of ⊑ and ⊏ to Sets
For X, Y ⊆ C^ω:
X ⊑ Y iff ∀x ∈ X ∃y ∈ Y, x ⊑ y
X ⊏ Y iff ∃y ∈ Y ∀x ∈ X, x ⊏ y
Monotony
• ⊑ is monotone if, for all M, N ∈ Rec(C): if [xM] ⊑ [xN] for some x ∈ C*, then [yM] ⊑ [yN] for all y ∈ C*.
Intuitively: at each moment during the play, the optimal choice between two possible futures M and N does not depend on the preceding finite play.
Example of a Non-Monotone Preference
C = ℝ.
[Diagram: an arena with rewards 0, 1 and 2; finite plays x and y with continuations v and w.]
u(xv) = 2/5, u(xw) = 1, u(yv) = 6/5, u(yw) = 1.
So u(xv) < u(xw) while u(yw) < u(yv): after x the player prefers w, after y he prefers v, hence the preference is not monotone.
Selectivity
• ⊑ is selective if for all x ∈ C* and all M, N, K ∈ Rec(C):
[x(M ∪ N)*K] ⊑ [xM*] ∪ [xN*] ∪ [xK]
Intuitively: the player cannot improve his payoff by switching between different behaviors.
Example of a Non-Selective Preference
C = {0,1}; u(x) = 1 if the colors 0 and 1 both occur infinitely often in x, 0 otherwise.
M = {1^k | 0 ≤ k}, N = {0^k | 0 ≤ k}.
(01)^ω ∈ [(M ∪ N)*], while [M*] = {1^ω} and [N*] = {0^ω}.
u((01)^ω) > u(1^ω) and u((01)^ω) > u(0^ω): switching between M and N improves the payoff, so ⊑ is not selective.
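The example can be checked mechanically for ultimately periodic plays, where "occurs infinitely often" means "occurs in the repeated cycle" (the prefix/cycle encoding is illustrative):

```python
def both_colors_payoff(prefix, cycle):
    """1 iff colors 0 and 1 both occur infinitely often in prefix . cycle^omega."""
    return 1 if (0 in cycle and 1 in cycle) else 0

# switching between M = 1* and N = 0* forever beats staying inside either:
print(both_colors_payoff([], [0, 1]))  # (01)^omega -> 1
print(both_colors_payoff([], [1]))     # 1^omega    -> 0
print(both_colors_payoff([], [0]))     # 0^omega    -> 0
```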
The Main Theorem
Given a preference relation ⊑, both players have optimal memoryless strategies for all games (G, ⊑) over finite arenas G if and only if the relations ⊑ and ⊑⁻¹ are monotone and selective.
Proof of the Necessary Condition
Given a preference relation ⊑, if both players have optimal memoryless strategies for all games (G, ⊑) over finite arenas G, then the relations ⊑ and ⊑⁻¹ are monotone and selective.
Simplification 1
[Diagram: a game (A, ⊑) with optimal strategies (σ#, τ#) and its dual game (B, ⊑⁻¹), obtained by exchanging S_Max and S_Min, i.e. swapping the roles of Max and Min.]
By this duality it is enough to prove the statement for ⊑ and player Max; the statement for ⊑⁻¹ and player Min follows.
Simplification 2
• It turns out that it suffices to consider one-player arenas instead of two-player arenas: already for one-player games, if Max has an optimal memoryless strategy then ⊑ has to be monotone and selective.
Lemma 5
Suppose that player Max has optimal memoryless strategies for all games (G, ⊑) over finite one-player arenas G = (S_Max, Ø, E). Then ⊑ is monotone and selective.
Proof of Monotony
Let x, y ∈ C* and M, N ∈ Rec(C) with [xM] ⊑ [xN]. We shall prove [yM] ⊑ [yN].
• A_x and A_y are deterministic co-accessible automata recognizing {x} and {y}
• A_N and A_M are co-accessible automata recognizing N and M
• W.l.o.g. A_N and A_M have no transition with the initial state as a target
Proof of Monotony (continued)
If [M] = Ø the claim is trivial. So assume [M] ≠ Ø and [N] ≠ Ø; by Lemma 4 there is an infinite path from the initial state of A_M and from the initial state of A_N.
Automaton A
[Diagram: automaton A obtained by gluing A_x and A_y at the initial state i; their final states F lead to a common state t, from which A_M and A_N branch off.]
The part of A going through A_x recognizes x(M ∪ N); all its infinite plays have colors in [x(M ∪ N)] = [xM] ∪ [xN] (Lemma 3).
Proof of Monotony (conclusion)
x, y ∈ C*, M, N ∈ Rec(C), [xM] ⊑ [xN]; we prove [yM] ⊑ [yN].
[Diagram: the arena built from A_x, A_y, A_M and A_N as above.]
Let q be the play consistent with the optimal memoryless strategy that starts through A_y. Since [xM] ⊑ [xN], at t the memoryless strategy chooses the A_N branch after x, and hence makes the same choice after y; thus color(q) ∈ [yN]. By optimality, [yM] ⊑ color(q), and therefore [yM] ⊑ [yN].
Proof of the Sufficient Condition
Given a preference relation ⊑ such that ⊑ and ⊑⁻¹ are monotone and selective, both players have optimal memoryless strategies for all games (G, ⊑) over finite arenas G.
Arena Number
• G = (S, E)
• n_G = |E| − |S|
• Each state has at least one outgoing transition, so n_G ≥ 0
• The proof is by induction on n_G
Induction Basis
For an arena G with n_G = 0, each state has exactly one outgoing transition, so the strategies are unique (and memoryless).
Induction Hypothesis
Let G be an arena and let ⊑ be monotone and selective. Suppose Max and Min have optimal memoryless strategies in all games (H, ⊑) over arenas H with n_H < n_G. Then Max has an optimal memoryless strategy in (G, ⊑).
Finding σ# and τ#
• We need to find memoryless σ# and τ# such that (σ#, τ#) is optimal.
• We will find a memoryless σ# and a strategy τ#_m, which may require memory, such that (σ#, τ#_m) is optimal.
• Permuting Max and Min, we find (σ#_m, τ#) optimal with τ# memoryless.
• If (σ#, τ#_m) and (σ#_m, τ#) are optimal, then (σ#, τ#) is optimal.
Induction Step
[Diagram: a state t with several outgoing transitions; splitting them gives two subarenas G0 and G1 of G, each with a smaller arena number.]
(σ#_i, τ#_i) are optimal strategies in G_i.
Induction Step (continued)
K_i = colors of the finite plays in G_i from t consistent with τ#_i.
K_i ∈ Rec(C); since ⊑ is monotone, either [xK0] ⊑ [xK1] for all x ∈ C*, or [xK1] ⊑ [xK0] for all x ∈ C*.
W.l.o.g. [xK1] ⊑ [xK0] for all x ∈ C*. So let σ# = σ#_0.
The strategy τ#_m
[Diagram: the arena G split at t into G0 and G1.]
τ#_m(p) =
• τ#_0(target(p)) if the last transition from t entered G0
• τ#_1(target(p)) if the last transition from t entered G1
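The strategy τ#_m needs exactly one bit of memory: which subarena the play last entered from t. A minimal sketch with hypothetical names (the paper gives no code; the state encoding here is invented for illustration):

```python
class OneBitMemoryStrategy:
    """Plays tau0 in G0-mode and tau1 in G1-mode; the mode records the
    last subarena the play visited outside the split state t."""

    def __init__(self, t, tau0, tau1, in_g0):
        self.t = t                 # the split state
        self.tau0, self.tau1 = tau0, tau1
        self.in_g0 = in_g0         # predicate: does a state belong to G0?
        self.mode_g0 = True        # the single bit of memory

    def move(self, state):
        if state != self.t:        # update the bit whenever we are not at t
            self.mode_g0 = self.in_g0(state)
        return (self.tau0 if self.mode_g0 else self.tau1)(state)

# toy usage: states starting with 'a' lie in G0, 'b' in G1, 't' is the split
tau = OneBitMemoryStrategy('t',
                           lambda s: ('via-G0', s),
                           lambda s: ('via-G1', s),
                           lambda s: s.startswith('a'))
print(tau.move('a1'))  # ('via-G0', 'a1')
print(tau.move('t'))   # still in G0-mode: ('via-G0', 't')
print(tau.move('b1'))  # ('via-G1', 'b1')
```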
Optimality of (σ#, τ#_m)
We must show, for all states s and all strategies σ and τ:
color(p_G(s, σ, τ#_m)) ⊑ color(p_G(s, σ#, τ#_m)) ⊑ color(p_G(s, σ#, τ))
color(p_G(s, σ#, τ#_m)) ⊑ color(p_G(s, σ#, τ)):
all plays consistent with σ# = σ#_0 stay in G0, where (σ#_0, τ#_0) is optimal, so this follows from optimality in G0.
color(p_G(s, σ, τ#_m)) ⊑ color(p_G(s, σ#, τ#_m)):
first the case where p_G(s, σ, τ#_m) does not traverse the state t: then all the plays involved stay in G0 and optimality in G0 applies.
color(p_G(s, σ, τ#_m)) ⊑ color(p_G(s, σ#, τ#_m)) — the play traverses t:
M_i = colors of the finite plays in G_i from t back to t consistent with τ#_i
x = color of the shortest play from s to t consistent with τ#_m
color(p_G(s, σ, τ#_m)) ∈ [x(M0 ∪ M1)*(K0 ∪ K1)]
⊑ [xM0*] ∪ [xM1*] ∪ [x(K0 ∪ K1)]   (selectivity of ⊑)
⊑ [x(K0 ∪ K1)]   (since M_i* ⊆ K_i)
= [xK0] ∪ [xK1]   (Lemma 3)
⊑ [xK0]   (by the w.l.o.g. choice [xK1] ⊑ [xK0])
color(p_G(s, σ, τ#_m)) ⊑ color(p_G(s, σ#, τ#_m)) — conclusion:
color(p_G(s, σ, τ#_m)) ⊑ [xK0], and by optimality in G0,
[xK0] ⊑ color(p_G0(s, σ#_0, τ#_0)) = color(p_G(s, σ#, τ#_m)).
A Very Important Corollary
Suppose that ⊑ is such that for each finite arena G = (S_Max, S_Min, E) controlled by one player (S_Max = Ø or S_Min = Ø), this player has an optimal memoryless strategy in (G, ⊑). Then for all finite two-player arenas G, both players have optimal memoryless strategies in the games (G, ⊑).