CPSC 7373: Artificial Intelligence, Lecture 10: Planning with Uncertainty. Jiang Bian, Fall 2012, University of Arkansas at Little Rock
Planning under Uncertainty (Diagram relating Planning, Uncertainty, and Learning: MDPs and PO-MDPs address planning under uncertainty; RL adds learning.)
Planning Agent Tasks Characteristics • An environment is stochastic if the outcome of an action is somewhat random, whereas it is deterministic if the outcome of an action is predictable and always the same. • An environment is fully observable if you can see the state of the environment, which means you can make all decisions based on the momentary sensory input; if you need memory, it is partially observable.
Markov Decision Process (MDP) (Diagram: a finite state machine with states S1, S2, S3 and actions a1, a2, contrasted with an MDP in which action a1 from S1 leads to S2 or S3 with 50% probability each; the difference is randomness in the action outcomes.)
Markov Decision Process (MDP) • States: S1 … Sn • Actions: a1 … ak • State transition matrix: T(S, a, S') = P(S'|a, S) • Reward function: R(S)
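To make these components concrete, here is a minimal sketch (illustrative Python, not from the lecture) of how the pieces of an MDP can be written down as plain data: a state set, an action set, a transition model, and a reward function. Only the 50/50 outcome of a1 in S1 comes from the diagram; the remaining entries are made-up examples.

```python
# A minimal sketch of the MDP pieces from this slide as plain Python data
# (illustrative values; only the 50/50 outcome of a1 in S1 comes from the
# diagram, the rest is made up for the example).

states = ["S1", "S2", "S3"]
actions = ["a1", "a2"]

# State transition matrix T(S, a, S') = P(S'|a, S),
# stored as {(state, action): {next_state: probability}}.
T = {
    ("S1", "a1"): {"S2": 0.5, "S3": 0.5},   # stochastic outcome, 50% / 50%
    ("S1", "a2"): {"S1": 1.0},
    ("S2", "a1"): {"S1": 1.0},
    ("S2", "a2"): {"S3": 1.0},
    ("S3", "a1"): {"S1": 1.0},
    ("S3", "a2"): {"S2": 1.0},
}

# Reward function R(S): one reward value per state (made-up numbers).
R = {"S1": 0.0, "S2": 1.0, "S3": -1.0}
```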
MDP Grid World (3 × 4 grid: rows a–c, columns 1–4; two absorbing states) Stochastic actions: the intended direction is taken with probability 80%, each of the two perpendicular directions with probability 10%. Policy: π(s) → A. The planning problem becomes one of finding the optimal policy.
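As a sketch of this stochastic action model (80% intended direction, 10% for each perpendicular slip; bumping into a wall keeps the agent in place), the helpers below build T(s, a, s') for the grid. The names are mine, and I assume the usual layout in which a4 and b4 are the absorbing states (+100 and -100, as given two slides later) and cell b2 is blocked.

```python
# A sketch of the grid world's stochastic transition model (helper names are
# mine; the layout assumes the standard 4x3 grid with cell b2 blocked and
# a4 = +100, b4 = -100 as the absorbing states).

ROWS, COLS = "abc", [1, 2, 3, 4]
BLOCKED = {("b", 2)}
ABSORBING = {("a", 4): +100, ("b", 4): -100}

# Each action moves in its intended direction with probability 0.8 and slips
# to each perpendicular direction with probability 0.1.
MOVES = {"N": (-1, 0), "S": (1, 0), "W": (0, -1), "E": (0, 1)}
PERPENDICULAR = {"N": ("W", "E"), "S": ("W", "E"), "W": ("N", "S"), "E": ("N", "S")}

def step(state, direction):
    """Deterministic move; bumping into the border or the blocked cell stays put."""
    r, c = ROWS.index(state[0]), state[1]
    dr, dc = MOVES[direction]
    nr, nc = r + dr, c + dc
    if 0 <= nr < len(ROWS) and nc in COLS and (ROWS[nr], nc) not in BLOCKED:
        return (ROWS[nr], nc)
    return state

def transition(state, action):
    """Return {next_state: probability}, i.e. T(s, a, s')."""
    probs = {}
    for direction, p in [(action, 0.8)] + [(d, 0.1) for d in PERPENDICULAR[action]]:
        nxt = step(state, direction)
        probs[nxt] = probs.get(nxt, 0.0) + p
    return probs
```

For example, transition(("a", 3), "E") gives ("a", 4) with probability 0.8, ("b", 3) with 0.1, and ("a", 3) itself with 0.1 (the northward slip bounces off the border).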
Stochastic Environments – Conventional Planning (search tree starting at c1 with actions N, S, W, E) • Problems: • Branching factor: 4 action choices with 3 outcomes each, so at least 12 branches to follow at every node • Depth of the search tree (loops can make it very deep) • Many states are visited more than once (i.e., states may re-occur), whereas in A* we ensure each state is visited only once
Policy Goal: find an optimal policy for all these states that, with maximum probability, leads to the +100 absorbing state. Quiz: What is the optimal action? a1: N, S, W, E ??? c1: N, S, W, E ??? c4: N, S, W, E ??? b3: N, S, W, E ???
MDP and Costs Reward function R(s): +100 at a4, -100 at b4, and -3 for all other states (i.e., an incentive to shorten the action sequence). Objective of the MDP: maximize the expected sum of discounted future rewards, E[ Σ_t γ^t R(S_t) ], where γ is the discount factor, e.g., γ = 0.9 (i.e., decay of the future rewards).
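A tiny illustration of the discounted objective: with γ = 0.9, a sequence of rewards r_0, r_1, … is worth Σ_t γ^t r_t. The reward sequence below is an invented example.

```python
# A tiny illustration of the discounted objective: the value of a reward
# sequence r_0, r_1, ... is sum_t gamma**t * r_t (example numbers are mine).

gamma = 0.9
rewards = [-3, -3, -3, +100]          # e.g. three -3 steps, then reaching +100
discounted_return = sum(gamma**t * r for t, r in enumerate(rewards))
print(discounted_return)              # -3 - 2.7 - 2.43 + 72.9 = 64.77
```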
Value Function For each state s, the value of the state is the expected sum of future discounted rewards, provided that we start in state s and execute policy π: V^π(s) = E[ Σ_t γ^t R(S_t) | S_0 = s, π ]. Planning = iteratively calculating value functions.
Value Iteration (grid world values before and after running value iteration through convergence)
Value Iteration - 2 Back-up equation: V(s) ← R(s) + γ max_a Σ_{s'} P(s'|a, s) V(s'). If s is a terminal state: V(s) = R(s). After convergence (Bellman equality), V(s) is the optimal future reward/cost trade-off you can achieve if you act optimally in any given state.
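Putting the back-up equation into code, here is a sketch of value iteration over the grid world, reusing the transition() helper from the earlier sketch. Terminal states keep their reward; every other state is repeatedly backed up until the largest change falls below a tolerance. γ, the step cost, and the tolerance are parameters (the quizzes that follow use γ = 1 and R(s) = -3).

```python
# Value iteration for the grid world, reusing ROWS, COLS, BLOCKED, ABSORBING,
# MOVES and transition() from the sketch above.

def value_iteration(gamma=1.0, step_cost=-3.0, tol=1e-6):
    states = [(r, c) for r in ROWS for c in COLS if (r, c) not in BLOCKED]
    V = {s: 0.0 for s in states}
    for s, reward in ABSORBING.items():
        V[s] = reward                              # terminal states keep their reward
    while True:
        delta = 0.0
        for s in states:
            if s in ABSORBING:
                continue
            # Bellman back-up: V(s) <- R(s) + gamma * max_a sum_s' P(s'|a,s) V(s')
            best = max(
                sum(p * V[s2] for s2, p in transition(s, a).items())
                for a in MOVES
            )
            new_v = step_cost + gamma * best
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < tol:
            return V
```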
Quiz – DETERMINISTIC (γ = 1, R(S) = -3): V(a3) = ???
Quiz - 1 (DETERMINISTIC, γ = 1, R(S) = -3): V(a3) = 97; V(b3) = ???
Quiz - 1 (DETERMINISTIC, γ = 1, R(S) = -3): V(a3) = 97; V(b3) = 94; V(c1) = ???
Quiz - 1 (DETERMINISTIC, γ = 1, R(S) = -3): V(a3) = 97; V(b3) = 94; V(c1) = 85
Quiz – STOCHASTIC (γ = 1, R(S) = -3, P = 0.8): V(a3) = ???
Quiz – STOCHASTIC (γ = 1, R(S) = -3, P = 0.8): V(a3) = 77; V(b3) = ???
Quiz – STOCHASTIC (γ = 1, R(S) = -3, P = 0.8): V(a3) = 77; V(b3) = 48.6. N: 0.8 × 77 + 0.1 × (-100) + 0.1 × 0 - 3 = 48.6; W: 0.1 × 77 + 0.8 × 0 + 0.1 × 0 - 3 = 4.7
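The slide's numbers are one-step back-ups with V(a3) = 77 and every other non-terminal value still 0; the two lines below reproduce the arithmetic for going North and going West from b3 (the slips and the bounce off the blocked cell b2 are my reading of the grid).

```python
# Reproducing the one-step back-ups from this slide, with V(a3) = 77 and all
# other non-terminal values still 0 (b4 is the -100 terminal state).

# North from b3: 80% up to a3 (77), 10% slip East into b4 (-100),
# 10% slip West bounces off the blocked cell b2 (stay at b3, value 0).
north = 0.8 * 77 + 0.1 * (-100) + 0.1 * 0 - 3

# West from b3: 80% bounces off b2 (stay, value 0), 10% slip North to a3 (77),
# 10% slip South to c3 (value 0).
west = 0.1 * 77 + 0.8 * 0 + 0.1 * 0 - 3

print(round(north, 1), round(west, 1))   # 48.6 4.7
```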
Value Iteration and Policy - 1 What is the optimal policy?
Value Iteration and Policy - 2 (STOCHASTIC, γ = 1, R(S) = -3, P = 0.8) What is the optimal policy? This is a situation where the risk of falling into the -100 state is balanced against the time spent going around it.
Value Iteration and Policy - 3 (STOCHASTIC, γ = 1, R(S) = 0, P = 0.8) What is the optimal policy?
Value Iteration and Policy - 4 (STOCHASTIC, γ = 1, R(S) = -100, P = 0.8) What is the optimal policy?
Markov Decision Processes • Fully observable: states S1, …, Sn; actions a1, …, ak • Stochastic: P(S'|a, S) • Reward: R(S) • Objective: maximize E[ Σ_t γ^t R(S_t) ] • Value iteration: compute V(S) • After convergence: π(S) = argmax_a Σ_{S'} P(S'|a, S) V(S')
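Once value iteration has converged, the policy is read off with the argmax above. A sketch, building on the earlier grid-world helpers:

```python
# Reading off the policy after convergence, using the argmax from this slide:
# pi(s) = argmax_a sum_s' P(s'|a, s) V(s').

def extract_policy(V):
    policy = {}
    for s in V:
        if s in ABSORBING:
            continue                      # no action needed in absorbing states
        policy[s] = max(
            MOVES,
            key=lambda a: sum(p * V[s2] for s2, p in transition(s, a).items()),
        )
    return policy

V = value_iteration(gamma=1.0, step_cost=-3.0)
print(extract_policy(V))                  # the chosen action for each cell
```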
Partial Observability (grid examples with +100 and -100 absorbing states) • Fully observable, deterministic • Fully observable, stochastic
Partial Observability (maze example: two exits marked '?', with a SIGN elsewhere in the maze that reveals which exit is worth +100) • Partially observable, stochastic • MDP vs. POMDP: optimal exploration versus exploitation, where some actions may be information-gathering actions (e.g., reading the sign) while others are goal-driven actions.
Partial Observability Information Space (Belief Space): each '?' exit holds the +100 with probability 50%, and the agent plans over these beliefs rather than over world states.
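In the belief space, the agent plans over probability distributions instead of states. As a loose sketch of the idea (the observation model O and the helper names are hypothetical, not from the lecture), a belief is updated with Bayes' rule after each action and observation:

```python
# A loose sketch of planning in belief space: the agent's state of knowledge
# is a probability distribution over world states, updated with Bayes' rule.
# transition(s, a) is a model like the one sketched earlier; O(o, s2) is a
# hypothetical observation model P(o | s2), not something given in the lecture.

def belief_update(belief, action, observation, transition, O):
    """Return b'(s') proportional to O(o | s') * sum_s P(s'|a, s) * b(s)."""
    predicted = {}
    for s, b in belief.items():
        for s2, p in transition(s, action).items():
            predicted[s2] = predicted.get(s2, 0.0) + p * b
    unnormalized = {s2: O(observation, s2) * pb for s2, pb in predicted.items()}
    total = sum(unnormalized.values())
    return {s2: pb / total for s2, pb in unnormalized.items()}
```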