
Markov Decision Processes AIMA: 17.1, 17.2 (excluding 17.2.3), 17.3


Presentation Transcript


  1. Markov Decision Processes (AIMA: 17.1, 17.2 excluding 17.2.3, and 17.3)

  2. From utility to optimal policy • The utility function U(s) allows the agent to select the action that maximizes the expected utility of the subsequent state (one-step lookahead): π*(s) = argmax_a Σ_s' P(s' | s, a) U(s')
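  A minimal Python sketch of this one-step lookahead. The MDP encoding used here (P[s][a] as a list of (probability, next_state) pairs, U as a dict of state utilities) is an assumption for illustration, not something specified in the slides.

  def best_action(s, actions, P, U):
      """Return the action maximizing sum over s' of P(s'|s,a) * U(s')."""
      def expected_utility(a):
          # Expected utility of the successor state under action a
          return sum(p * U[s2] for p, s2 in P[s][a])
      return max(actions, key=expected_utility)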

  3. The Bellman equation • If the utility of a state is the expected sum of discounted rewards from that point onwards, then there is a direct relationship between the utility of a state and the utilities of its neighbors: the utility of a state is the immediate reward for that state plus the expected discounted utility of the next state, assuming that the agent chooses the optimal action. This is the Bellman equation: U(s) = R(s) + γ max_a Σ_s' P(s' | s, a) U(s')

  4. The Bellman equation

  5. The value iteration algorithm • For a problem with n states, there are n Bellman equations in n unknowns; however, the system is NOT linear, because of the max operator • Start with arbitrary initial values for U(s) and apply the Bellman update iteratively (a sketch follows below) • Guaranteed to converge to the unique solution for γ < 1 Demo: http://people.cs.ubc.ca/~poole/demos/mdp/vi.html
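  A minimal value iteration sketch under the same assumed encoding (P[s][a] = list of (probability, next_state) pairs, R[s] = reward of state s); the simple tolerance-based stopping rule is a heuristic, not the error bound derived in AIMA.

  def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
      U = {s: 0.0 for s in states}  # arbitrary initial utilities
      while True:
          U_new = {}
          delta = 0.0
          for s in states:
              # Bellman update: immediate reward plus the best discounted
              # expected utility over successor states
              U_new[s] = R[s] + gamma * max(
                  sum(p * U[s2] for p, s2 in P[s][a]) for a in actions
              )
              delta = max(delta, abs(U_new[s] - U[s]))
          U = U_new
          if delta < tol:
              return U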

  6. Policy iteration algorithm • It is possible to obtain an optimal policy even when the utility-function estimate is inaccurate • If one action is clearly better than all the others, then the exact magnitudes of the utilities of the states involved need not be precise • Value iteration computes the utilities of states and then extracts a policy from them; policy iteration instead alternates between computing the utilities of states for a given policy (policy evaluation) and computing a new policy from those utilities (policy improvement), until the policy no longer changes

  7. Policy iteration algorithm • With the policy π fixed, the max over actions disappears and the Bellman equation simplifies to a linear equation: U_π(s) = R(s) + γ Σ_s' P(s' | s, π(s)) U_π(s')

  8. Policy evaluation • For a problem with n states, this gives n linear equations in n unknowns, solvable exactly in O(n³) time by standard linear-algebra methods; an iterative scheme can also be used (see the sketch below)
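  A minimal policy iteration sketch, assuming states are indexed 0..n-1, actions is a list, P[s][a] is a list of (probability, next_state) pairs, and R is a list of per-state rewards; these names and the encoding are illustrative assumptions, not from the slides. Policy evaluation solves the linear system (I - γ·P_π) U = R exactly with numpy, which is the O(n³) step mentioned above.

  import numpy as np

  def policy_iteration(n_states, actions, P, R, gamma=0.9):
      pi = {s: actions[0] for s in range(n_states)}  # arbitrary initial policy
      while True:
          # Policy evaluation: build P_pi[s][s'] = P(s' | s, pi(s)) and solve
          # the n linear Bellman equations exactly
          P_pi = np.zeros((n_states, n_states))
          for s in range(n_states):
              for p, s2 in P[s][pi[s]]:
                  P_pi[s, s2] += p
          U = np.linalg.solve(np.eye(n_states) - gamma * P_pi,
                              np.array(R, dtype=float))
          # Policy improvement: greedy one-step lookahead on the new utilities
          changed = False
          for s in range(n_states):
              best = max(actions,
                         key=lambda a: sum(p * U[s2] for p, s2 in P[s][a]))
              if best != pi[s]:
                  pi[s], changed = best, True
          if not changed:  # policy is stable, hence optimal
              return pi, U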

  9. Summary • Markov decision processes • Utility of state sequence • Utility of states • Value iteration algorithm • Policy iteration algorithm
