Markov Decision Processes
AIMA: 17.1, 17.2 (excluding 17.2.3), 17.3
Search, planning, MDP
• Planning extends search with factorized state representations
• MDPs add uncertainty & utility: actions are unreliable, there are no hard goals, and utility depends on the entire environment history
Planning and MDPs
• In addition to actions having costs, we might have goals with rewards: if you achieve a goal, you get the corresponding reward
• The objective of planning then becomes finding a plan with the highest net benefit, measured as the difference between the cumulative reward for the goals achieved and the cumulative cost of the actions used
• This problem is both easy (an "empty" plan is a solution, just not a very good one) and hard (the "quality of the plan", in terms of its net benefit, now matters)
• Beyond this, rewards need not be limited to goals achieved in the final state; they can also be gathered for visiting certain good states along the way
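As a concrete illustration of the net-benefit objective, here is a minimal Python sketch; the function and dictionary names are assumptions for illustration, not from the slides.

```python
# Minimal sketch: net benefit of a plan = cumulative reward for the goals achieved
# minus the cumulative cost of the actions used. All names here are illustrative.

def net_benefit(plan_actions, achieved_goals, action_cost, goal_reward):
    total_cost = sum(action_cost[a] for a in plan_actions)
    total_reward = sum(goal_reward[g] for g in achieved_goals)
    return total_reward - total_cost

# The "empty" plan is always a solution, just not a very good one:
print(net_benefit([], set(), {}, {}))   # -> 0 (no reward, no cost)
```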
Markov decision process
• A sequential decision problem for a fully observable, stochastic environment with a Markovian transition model and additive rewards is called an MDP. It consists of:
• A set of states
• A set of actions available in each state
• A transition model P(j | i, a), the probability that doing action a in state i leads to state j
• A reward function R(s)
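A minimal sketch of how such an MDP might be represented in Python, using plain dictionaries; the particular states, probabilities, and rewards are made-up assumptions for illustration.

```python
# Sketch: an MDP as plain data structures. The states, probabilities, and rewards
# below are made up for illustration only.

mdp = {
    "states": ["i", "j", "k"],
    "actions": {"i": ["a", "b"], "j": ["a"], "k": []},   # actions available in each state
    # Transition model P(j | i, a): (state, action) -> {next_state: probability}
    "P": {
        ("i", "a"): {"j": 0.8, "i": 0.2},
        ("i", "b"): {"k": 1.0},
        ("j", "a"): {"k": 0.9, "i": 0.1},
    },
    # Reward function R(s)
    "R": {"i": -0.04, "j": -0.04, "k": 1.0},
}

def P(next_state, state, action):
    """Probability that doing `action` in `state` leads to `next_state`."""
    return mdp["P"][(state, action)].get(next_state, 0.0)
```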
What does a solution look like?
• The solution should specify the optimal action to take in each state (called a "policy")
• A policy is a function from states to actions
• Not a sequence of actions anymore, because actions are non-deterministic
• If there are |S| states and |A| actions available in each state, then there are |A|^|S| policies
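A small Python sketch of this idea, using made-up state and action names: a policy is just a dictionary from states to actions, and for a tiny state space we can enumerate all |A|^|S| of them.

```python
# Sketch: a policy is a mapping from states to actions; enumerate all |A|^|S| policies
# for a tiny example (state and action names are illustrative assumptions).
from itertools import product

states = ["s1", "s2", "s3"]
actions = ["left", "right"]    # assume the same action set in every state, for simplicity

policies = [dict(zip(states, choice)) for choice in product(actions, repeat=len(states))]
print(len(policies))    # |A|^|S| = 2**3 = 8
print(policies[0])      # e.g. {'s1': 'left', 's2': 'left', 's3': 'left'}
```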
Optimal policies depend on rewards (figure: optimal policy for R(s) = -0.04 in nonterminal states)
Horizon & Policy • We said policy is a function from states to actions,but we sort of lied • Best policy is non-stationary, i.e.,depends on how long the agent has to “live” – which is called “horizon” • More generally, a policy is a mapping from <state, time-to-death> <action> • So, if we have a horizon of k, then we will have k policies • If the horizon is infinite, then policies must all be the same.. (So infinite horizon case is easy!)
Horizon & Policy Infinite horizon finite horizon, k=3 We will concentrate on infinite horizon problems In which the optimal policy is stationary
Stationary preferences
• Preferences between state sequences are stationary:
• If two sequences [s0, s1, s2, …] and [s0', s1', s2', …] begin with the same state (s0 = s0'), then the two sequences should be preference-ordered the same way as the sequences [s1, s2, …] and [s1', s2', …]
• If you prefer future f1 to f2 starting tomorrow, you should prefer them the same way even if they start today
Utility of state sequence
• Define the utility of a sequence of states in terms of their rewards
• Assume "stationarity" of preferences
• Then there are only two reasonable ways to define the utility of a state sequence: additive rewards, U([s0, s1, s2, …]) = R(s0) + R(s1) + R(s2) + …, and discounted rewards, U([s0, s1, s2, …]) = R(s0) + γR(s1) + γ²R(s2) + …, with discount factor 0 ≤ γ ≤ 1
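A small Python sketch of these two definitions, applied to a made-up reward sequence:

```python
# Sketch: the two ways of defining the utility of a state sequence, additive and
# discounted. The reward values below are made up for illustration.

def additive_utility(rewards):
    return sum(rewards)

def discounted_utility(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [-0.04, -0.04, -0.04, 1.0]             # rewards along a short state sequence
print(additive_utility(rewards))                 # ≈ 0.88
print(discounted_utility(rewards, gamma=0.9))    # ≈ 0.62
```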
The big picture: compute the utilities of states, and from them compute the optimal policy
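One direction of this picture, sketched in Python over the dict-style MDP assumed earlier: given the utilities U(s), the optimal policy picks in each state the action that maximizes the expected utility of the next state. The function name and layout are assumptions for illustration.

```python
# Sketch: greedy policy extraction from utilities, pi(s) = argmax_a sum_j P(j | s, a) * U(j).
# Uses the dict-style MDP structures assumed in the earlier sketches.

def best_policy(states, actions, P, U):
    pi = {}
    for s in states:
        if not actions[s]:                 # terminal state: no action to choose
            pi[s] = None
            continue
        pi[s] = max(actions[s],
                    key=lambda a: sum(p * U[j] for j, p in P[(s, a)].items()))
    return pi
```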
Utility of a state
• The utility of a state is the expected utility of the state sequences that might follow it
• The state sequences depend on the policy that is executed
• If we let st be the state the agent is in after executing policy π for t steps (note that st is a random variable), then the utility of executing π starting in s is Uπ(s) = E[ Σt≥0 γ^t R(st) ], with s0 = s
• The true utility U(s) of a state is Uπ*(s): the expected sum of discounted rewards if the agent executes an optimal policy
• Note that U(s) is different from R(s), the short-term reward for being in s
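As an illustration of "expected sum of discounted rewards", here is a minimal Monte Carlo sketch that estimates Uπ(s) by simulating the policy on the dict-style MDP assumed earlier; the function name, defaults, and structure are assumptions, not from the slides.

```python
# Sketch: estimate U^pi(s) = E[ sum_t gamma^t R(s_t) ] by simulating the policy.
# Uses the dict-style MDP structures assumed earlier; all names are illustrative.
import random

def estimate_utility(s0, pi, P, R, gamma=0.9, episodes=1000, max_steps=100):
    total = 0.0
    for _ in range(episodes):
        s, ret, discount = s0, 0.0, 1.0
        for _ in range(max_steps):
            ret += discount * R[s]
            a = pi[s]
            if a is None:                   # terminal state: episode ends
                break
            next_states = list(P[(s, a)].items())
            s = random.choices([j for j, _ in next_states],
                               weights=[p for _, p in next_states])[0]
            discount *= gamma
        total += ret
    return total / episodes
```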