Mixed Strategies
2.4 Mixed Strategies • When there is no saddle point: • We’ll think of playing the game repeatedly. • We continue to assume that the players use the same basic philosophy and principles as before (that is, each seeks the “safest” strategy, the one that best protects them from losing). • However, we now assume that the players can mix up the strategies that they use. • Let Player I play strategy ai with probability xi, and • let Player II play strategy Aj with probability yj.
2.4 Mixed Strategies • [Table: the payoff matrix with each row (Player I strategy) and column (Player II strategy) annotated by its relative frequency of application.]
Example • Since x1, x2, y1 and y2 are probabilities, • x1 + x2 = 1 and y1 + y2 = 1. • One possibility would be for Player I to use a1 3/4 of the time and a2 1/4 of the time, whilst • Player II uses A1 1/2 of the time and A2 1/2 of the time. • BUT, WHAT IS THE BEST COMBO ???
2.4.1 Definition • A mixed strategy for Player I is a vector x = (x1, ..., xm) with xi ≥ 0 for all i and Σi xi = 1. • Similarly, a mixed strategy for Player II is a vector y = (y1, ..., yn) with yj ≥ 0 for all j and Σj yj = 1. • A pure strategy is a vector x in which one component is 1 and all other components are 0, e.g. (0, 0, 1, 0, 0). • So if a person uses a pure strategy they play the same option all the time. (This is what we do when there is a saddle point.)
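As a small illustration only (not from the slides; NumPy is assumed and the helper names are hypothetical), the two definitions can be checked mechanically:

```python
import numpy as np

def is_mixed_strategy(x, tol=1e-9):
    """A mixed strategy: non-negative entries that sum to 1."""
    x = np.asarray(x, dtype=float)
    return bool(np.all(x >= -tol) and abs(x.sum() - 1.0) <= tol)

def is_pure_strategy(x, tol=1e-9):
    """A pure strategy: a mixed strategy with a single component equal to 1."""
    x = np.asarray(x, dtype=float)
    return is_mixed_strategy(x, tol) and np.sum(np.isclose(x, 1.0)) == 1

print(is_mixed_strategy([0.75, 0.25]))    # True
print(is_pure_strategy([0, 0, 1, 0, 0]))  # True: the example from the slide
print(is_pure_strategy([0.75, 0.25]))     # False: mixed but not pure
```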
Expected Payoff • Let E(x,y) denote the expected value of the payoff to Player I, given that she uses strategy x and Player II uses strategy y. By definition, • E(x,y) := Σi,j xi yj vij = xVy • (Convention: in xVy, x is a row vector, y is a column vector, and V is a matrix.) • If we play the game repeatedly many times, Player I expects to receive E(x,y) on average.
2.4.2 Example • No saddle point; the payoff matrix (read off from the expansion below) is V = [[1, 5], [6, 2]]. • E(x,y) = xVy = (x1, x2) V (y1, y2) = (x1, x2)(y1 + 5y2, 6y1 + 2y2) = x1y1 + 5x1y2 + 6x2y1 + 2x2y2 • For x = (0.5, 0.5) we obtain E(x,y) = 3.5(y1 + y2) = 3.5 for any y, since y1 + y2 = 1. • Note that this is better than the optimal security level for Player I (equal to 2 for pure strategies) !!!! • BUT, CAN WE DO BETTER?
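A minimal numerical sketch of this calculation, assuming NumPy; V = [[1, 5], [6, 2]] is the matrix of this example, and expected_payoff is a hypothetical helper implementing the convention E(x, y) = xVy:

```python
import numpy as np

V = np.array([[1, 5],
              [6, 2]])           # payoff matrix of Example 2.4.2 (payoffs to Player I)

def expected_payoff(x, V, y):
    """E(x, y) = x V y, with x a row vector and y a column vector."""
    return np.asarray(x) @ V @ np.asarray(y)

x = np.array([0.5, 0.5])
for y1 in [0.0, 0.25, 0.5, 1.0]:
    y = np.array([y1, 1 - y1])
    print(y, expected_payoff(x, V, y))   # always 3.5, regardless of y
```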
Similarly, if y = (0.5, 0.5), we have E(x,y) = 3x1 + 4x2 ≤ 4x1 + 4x2 = 4 (why?). • Note that the optimal security level for Player II is equal to 5 (for pure strategies). Thus, this mixed strategy is (on average) superior to any pure strategy that Player II can use.
Notation • S := set of feasible mixed strategies for Player I, i.e. S := {(x1, ..., xm): xi ≥ 0, Σi xi = 1} • T := set of feasible mixed strategies for Player II, i.e. T := {(y1, ..., yn): yj ≥ 0, Σj yj = 1} • Our aim is to choose “the best” of all the elements in S and T.
2.4.3 Definition • The security level of Player I associated with strategy x in S is the minimum feasible expected payoff to Player I, given that she uses x (and that Player II is doing sensible things, that is, playing some y in T). We denote the security level for Player I by s(x), i.e. s(x) := min {E(x,y): y in T}. Similarly, for Player II, let σ(y) denote the security level associated with strategy y in T, namely σ(y) := max {E(x,y): x in S}.
1.4.1 Theorem • The following equalities hold: s(x) = min {xV.j : j = 1, 2, ..., n} and σ(y) = max {Vi. y : i = 1, 2, ..., m}, where V.j denotes the j-th column of V and Vi. its i-th row. • In words, if Player I is using a given strategy x, Player II can restrict herself to pure strategies !!!! • If Player II is using a given strategy y, Player I can restrict himself to pure strategies !!! • An LP-based proof is given below. • (See Lecture Notes for an alternative direct proof.)
LP-based Proof • Since by definition E(x,y) = xVy, we have s(x) = min {xVy: y in T}. Let c := xV; then s(x) = min {cy: y in T} = min {cy: y1 + ... + yn = 1, yj ≥ 0}. This is an LP problem with one functional constraint; an LP attains its optimum at a basic feasible solution, and here every basic feasible solution has the form y = (0, 0, ..., 0, 1, 0, ..., 0), which is in fact a pure strategy! • Similarly for σ(y). • Q: What is the value of j for which yj = 1? • A: The j with the smallest cj value!
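A hedged numerical illustration of Theorem 1.4.1 (NumPy assumed; the helper s is hypothetical): the smallest entry of xV coincides, up to sampling error, with the minimum of E(x, y) over a large random sample of mixed strategies y.

```python
import numpy as np

V = np.array([[1, 5],
              [6, 2]])                       # matrix of Example 2.4.2

def s(x, V):
    """Security level of Player I: minimum over columns (pure strategies) of xV."""
    return np.min(np.asarray(x) @ V)

x = np.array([0.25, 0.75])                   # an arbitrary mixed strategy for Player I

# Sample many mixed strategies y from the simplex T and minimise E(x, y) over the sample.
rng = np.random.default_rng(0)
ys = rng.dirichlet(np.ones(V.shape[1]), size=10_000)
sampled_min = np.min(ys @ (x @ V))           # each entry is E(x, y) = (xV) . y

print(s(x, V), sampled_min)                  # both close to min(xV) = min(4.75, 2.75) = 2.75
```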
2.4.5 Definition • The optimal security level for Player I is • v1 := max {s(x): x in S} • and the optimal security level for Player II is • v2 := min {σ(y): y in T}. • If v1 = v2, we call the common quantity the value of the game.
Example 2.4.2 (Continued) • v1 = max {s(x): x in S} = max {min {xV.j : j = 1, 2, ..., n}: x in S} (using Theorem 1.4.1) = max {min {xV.1, xV.2}: x in S} = max {min {x1 + 6x2, 5x1 + 2x2}: x in S} = max {min {x1 + 6 – 6x1, 5x1 + 2 – 2x1}: 0 ≤ x1 ≤ 1} (using x1 + x2 = 1) = max {min {–5x1 + 6, 3x1 + 2}: 0 ≤ x1 ≤ 1}
[Graph: the lines –5x1 + 6 and 3x1 + 2 plotted against x1 on [0, 1]; the minimum of the two lines (the lower envelope) attains its maximum where the lines intersect.] • v1 = max {min {–5x1 + 6, 3x1 + 2}: 0 ≤ x1 ≤ 1}; the lines cross at x*1 = 1/2, so x*2 = 1 – x*1 = 1/2 and v1 = 3x*1 + 2 = 7/2.
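A small sketch (illustration only, assuming NumPy) that reproduces this graphical solution by scanning x1 over [0, 1]:

```python
import numpy as np

x1 = np.linspace(0.0, 1.0, 10_001)
lower_envelope = np.minimum(-5 * x1 + 6, 3 * x1 + 2)   # s(x) as a function of x1

i = np.argmax(lower_envelope)
print(x1[i], lower_envelope[i])   # 0.5 and 3.5, i.e. x* = (1/2, 1/2) and v1 = 7/2

# The exact maximiser is where the lines cross: -5*x1 + 6 = 3*x1 + 2  =>  x1 = 1/2.
```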
For Player II • v2 = min {σ(y): y in T} • = min {max {Vi. y: i = 1, ..., m}: y in T} • = min {max {y1 + 5y2, 6y1 + 2y2}: y in T} • = min {max {–4y1 + 5, 4y1 + 2}: 0 ≤ y1 ≤ 1} (using y2 = 1 – y1)
[Graph: the lines –4y1 + 5 and 4y1 + 2 plotted against y1 on [0, 1]; the maximum of the two lines (the upper envelope) attains its minimum where the lines intersect.] • v2 = min {max {–4y1 + 5, 4y1 + 2}: 0 ≤ y1 ≤ 1}; the lines cross at y*1 = 3/8, so y*2 = 1 – y*1 = 5/8 and v2 = 4y*1 + 2 = 7/2.
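The same numerical scan for Player II (illustration only, assuming NumPy):

```python
import numpy as np

y1 = np.linspace(0.0, 1.0, 10_001)
upper_envelope = np.maximum(-4 * y1 + 5, 4 * y1 + 2)   # sigma(y) as a function of y1

i = np.argmin(upper_envelope)
print(y1[i], upper_envelope[i])   # 0.375 and 3.5, i.e. y* = (3/8, 5/8) and v2 = 7/2

# Exact minimiser: -4*y1 + 5 = 4*y1 + 2  =>  y1 = 3/8.
```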
Is this solution stable? • Let us see whether Player I is happy with x*, given that Player II is using y*. • Since Vy* = (y*1 + 5y*2, 6y*1 + 2y*2) = (7/2, 7/2), for any feasible strategy x for Player I we have: • xVy* = (7/2)(x1 + x2) = 7/2 for all x in S. • Thus, given that Player II is using y*, Player I is happy with x*; in fact, she is equally happy with any feasible strategy. • Convince yourself that Player II is happy with y* given that Player I is using x*.
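A quick numerical stability check (a sketch, assuming NumPy): random deviations by either player leave the deviator no better off than at (x*, y*).

```python
import numpy as np

V = np.array([[1, 5],
              [6, 2]])
x_star = np.array([0.5, 0.5])
y_star = np.array([3/8, 5/8])

value = x_star @ V @ y_star
print(value)                                       # 3.5

rng = np.random.default_rng(1)
xs = rng.dirichlet(np.ones(2), size=1_000)         # random alternative strategies for Player I
ys = rng.dirichlet(np.ones(2), size=1_000)         # random alternative strategies for Player II

print(np.max(xs @ V @ y_star) <= value + 1e-12)    # True: Player I cannot gain by deviating
print(np.min(x_star @ V @ ys.T) >= value - 1e-12)  # True: Player II cannot pay less by deviating
```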
1.4.4 Definition • A strategy pair (x*, y*) in S × T is said to be in equilibrium if xVy* ≤ x*Vy* ≤ x*Vy for all (x, y) in S × T. • Fundamental questions: • Do we always have such pairs? • How do we construct such pairs if they exist?
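These questions are taken up in the lectures; purely as a hedged sketch of one standard construction (not taken from this slide), Player I’s optimal mixed strategy and v1 can be computed by a linear program: maximise v subject to (xV)j ≥ v for all j, Σi xi = 1, x ≥ 0. Here it is set up with scipy.optimize.linprog, which minimises, so we minimise –v; a symmetric LP gives y* for Player II.

```python
import numpy as np
from scipy.optimize import linprog

V = np.array([[1, 5],
              [6, 2]], dtype=float)
m, n = V.shape

# Decision variables: (x_1, ..., x_m, v).  Maximise v  <=>  minimise -v.
c = np.concatenate([np.zeros(m), [-1.0]])

# Constraints (xV)_j >= v for every column j, rewritten as -(xV)_j + v <= 0.
A_ub = np.hstack([-V.T, np.ones((n, 1))])
b_ub = np.zeros(n)

# x is a probability vector: sum(x) = 1, x >= 0; v is a free variable.
A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, v1 = res.x[:m], res.x[m]
print(x_star, v1)   # approximately (0.5, 0.5) and 3.5, matching the graphical solution
```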
Example • Solve the two-person zero-sum game whose payoff matrix is given in the lecture. • See lecture for solution.