This book by Miguel Ángel Soto Santibáñez delves into Q-learning algorithms in reinforcement learning and how they learn control strategies for autonomous agents in Markov decision processes. It tackles the practical limitations of traditional approaches, such as static prototypes and large memory requirements, with a method based on dynamic, moving prototypes. The proposed method models the Q-function with a regression tree and updates it efficiently online, offering a practical solution for a wide range of reinforcement learning problems.
ONLINE Q-LEARNER USING MOVING PROTOTYPES by Miguel Ángel Soto Santibáñez
Reinforcement Learning What does it do? Tackles the problem of learning control strategies for autonomous agents. What is the goal? The goal of the agent is to learn an action policy that maximizes the total reward it will receive from any starting state.
Reinforcement Learning What does it need? This method assumes that training information is available in the form of a real-valued reward signal given for each state-action transition, i.e., a tuple (s, a, r). What problem setting? Very often, reinforcement learning fits a problem setting known as a Markov decision process (MDP).
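To make the (s, a, r) training information concrete, here is a minimal sketch of one learning experience as a data record; the class name and fields are illustrative, not taken from the book.

```python
from dataclasses import dataclass
from typing import Hashable

# One learning experience as described above: the agent was in state s, took
# action a, and received the real-valued reward r; the resulting state s2
# (i.e. delta(s, a)) is kept as well so a Q-update can look one step ahead.
# The class name and field names are illustrative, not from the book.
@dataclass(frozen=True)
class Experience:
    s: Hashable   # state before the transition
    a: Hashable   # action taken
    r: float      # real-valued reward signal
    s2: Hashable  # resulting state
```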
Reinforcement Learning vs. Dynamic Programming: reward function r(s, a) → r; state transition function δ(s, a) → s’
Q-learning An off-policy control algorithm. Advantage: Converges to an optimal policy in both deterministic and nondeterministic MDPs. Disadvantage: Only practical on a small number of problems.
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
  Initialize s
  Repeat (for each step of the episode):
    Choose a from s using an exploratory policy
    Take action a, observe r, s’
    Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
    s ← s’
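The following is a minimal tabular sketch of this algorithm in Python. The environment interface (reset, actions, step) is an assumption made for illustration; only the update rule itself comes from the slide above.

```python
import random
from collections import defaultdict

def q_learning(env, episodes, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning following the update rule above.

    `env` is assumed to expose reset() -> s, actions(s) -> list of actions,
    and step(s, a) -> (r, s2, done); this interface is illustrative.
    """
    Q = defaultdict(float)                      # Q(s, a) initialized arbitrarily (here: 0)
    for _ in range(episodes):
        s = env.reset()                         # initialize s
        done = False
        while not done:
            acts = env.actions(s)
            if random.random() < epsilon:       # epsilon-greedy exploratory policy
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda act: Q[(s, act)])
            r, s2, done = env.step(s, a)        # take action a, observe r, s'
            best_next = 0.0 if done else max(Q[(s2, a2)] for a2 in env.actions(s2))
            # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2                              # s <- s'
    return Q
```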
Introduction to Q-learning Algorithm • An episode: { (s1, a1, r1), (s2, a2, r2), …, (sn, an, rn) } • s’: the state that results from taking action a in state s, i.e., s’ = δ(s, a) • Q(s, a): the estimated value of taking action a in state s (the expected discounted cumulative reward) • γ: the discount factor; α: the learning rate
A Sample Problem [figure: a grid world with cells labeled A and B; transitions earn rewards of r = 8, r = 0, or r = −8]
States and actions — states: the cells of the grid; actions: N, S, E, W (move north, south, east, west). A toy implementation of this environment is sketched below.
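Here is a small grid-world environment in the spirit of the sample problem, written to plug into the tabular sketch above. The 3×3 layout, the start cell, and the exact positions of the +8 and −8 cells are assumptions for illustration; the slides only fix the reward values and the four actions.

```python
# A toy environment matching the grid-world flavour of the sample problem.
# The 3x3 layout, the start cell, and the positions of the goal (+8) and
# penalty (-8) cells are assumptions made for illustration only.
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

class GridWorld:
    def __init__(self, width=3, height=3, goal=(2, 2), trap=(2, 0)):
        self.width, self.height = width, height
        self.goal, self.trap = goal, trap

    def reset(self):
        return (0, 0)                      # assumed start cell

    def actions(self, s):
        return list(ACTIONS)               # N, S, E, W in every state

    def step(self, s, a):
        dx, dy = ACTIONS[a]
        x = min(max(s[0] + dx, 0), self.width - 1)   # walls clip the move
        y = min(max(s[1] + dy, 0), self.height - 1)
        s2 = (x, y)
        if s2 == self.goal:
            return 8.0, s2, True           # reaching the goal cell: r = 8, episode ends
        if s2 == self.trap:
            return -8.0, s2, True          # reaching the penalty cell: r = -8
        return 0.0, s2, False              # every other transition: r = 0

# Plugs into the earlier sketch, e.g.: Q = q_learning(GridWorld(), episodes=500)
```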
The Q(s, a) function [figure: table of Q-values with states as columns and actions as rows]
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
  Initialize s
  Repeat (for each step of the episode):
    Choose a from s using an exploratory policy
    Take action a, observe r, s’
    Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
    s ← s’
Initializing the Q(s, a) function [figure: table of Q-values with states as columns and actions as rows]
Q-learning Algorithm
Initialize Q(s, a) arbitrarily
Repeat (for each episode):
  Initialize s
  Repeat (for each step of the episode):
    Choose a from s using an exploratory policy
    Take action a, observe r, s’
    Q(s, a) ← Q(s, a) + α[r + γ max_a’ Q(s’, a’) − Q(s, a)]
    s ← s’
Calculating new Q(s, a) values — 1st step, 2nd step, 3rd step, 4th step (one Q-update per state-action transition of the episode; a worked numeric example follows below)
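As a concrete illustration of one such update, consider the step that enters the reward-8 cell. The values α = 1 and γ = 0.9 are assumed here; the slides do not state them.

```python
# With alpha = 1 and an assumed gamma = 0.9, the update on the step that
# enters the goal cell reduces to Q(s, a) <- r + gamma * max_a' Q(s', a'):
alpha, gamma = 1.0, 0.9
q_old, r, best_next = 0.0, 8.0, 0.0        # terminal next state contributes 0
q_new = q_old + alpha * (r + gamma * best_next - q_old)
print(q_new)                               # 8.0
# In a later episode this value backs up one cell further:
print(0.0 + 0.9 * 8.0)                     # 7.2, the value of the neighbouring state-action
```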
The Q(s, a) function after the first episode [figure: table of Q-values with states as columns and actions as rows]
Calculating new Q(s, a) values — 1st step, 2nd step, 3rd step, 4th step (second episode)
The Q(s, a) function after the second episode [figure: table of Q-values with states as columns and actions as rows]
The Q(s, a) function after a few episodes [figure: table of Q-values with states as columns and actions as rows]
One of the optimal policies [figure]
Another of the optimal policies [figure]
The problem with tabular Q-learning What is the problem? Only practical in a small number of problems because: a) Q-learning can require many thousands of training iterations to converge in even modest-sized problems. b) Very often, the memory resources required by this method become too large.
Solution What can we do about it? Use generalization. What are some examples? Tile coding, Radial Basis Functions, Fuzzy function approximation, Hashing, Artificial Neural Networks, LSPI, Regression Trees, Kanerva coding, etc.
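To show what generalization buys in the simplest possible terms, here is a sketch of state aggregation, where many raw states share one table entry. It only illustrates the general idea behind methods such as tile coding or hashing; it is not a faithful implementation of any of the techniques listed above.

```python
# A minimal illustration of generalization by state aggregation: many raw
# states map to one coarse key, so they share (and jointly update) a single
# Q-value. This only illustrates the general idea behind methods such as
# tile coding or hashing; it is not a faithful implementation of either.
def aggregate(state, resolution=0.5):
    """Discretize a tuple of real-valued features into a coarse grid cell."""
    return tuple(int(x // resolution) for x in state)

# The Q-table is then indexed by (aggregate(s), a) instead of (s, a),
# e.g. Q[(aggregate((0.42, 1.37)), "N")] covers every nearby state.
```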
Shortcomings • Tile coding: curse of dimensionality. • Kanerva coding: static prototypes. • LSPI: requires a priori knowledge of the Q-function. • ANN: requires a large number of learning experiences. • Batch + regression trees: slow and requires lots of memory.
Needed properties 1) Memory requirements should not explode exponentially with the dimensionality of the problem. 2) It should tackle the pitfalls caused by the usage of “static prototypes”. 3) It should try to reduce the number of learning experiences required to generate an acceptable policy. NOTE: All this without requiring a priori knowledge of the Q-function.
Overview of the proposed method 1) The proposed method limits the number of prototypes available to describe the Q-function (as in Kanerva coding). 2) The Q-function is modeled using a regression tree (as in the batch method proposed by Sridharan and Tesauro). 3) However, unlike in Kanerva coding, the prototypes are not static but dynamic. 4) The proposed method can update the Q-function once for every available learning experience, so it can be an online learner (a rough skeleton of this loop is sketched below).
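The skeleton below is only a rough reading of points 1–4: a single loop that updates a tree-backed Q-model once per learning experience while keeping the prototype count bounded. The `model` object and its methods (predict, update, size, merge_least_useful) are placeholders standing in for the book's regression-tree model, not its actual API; `env` follows the toy interface sketched earlier.

```python
import random

# Rough skeleton of the online learner described above, under assumptions:
# `model` stands in for the regression-tree Q-function of the proposed method
# (predict/update/size/merge_least_useful are placeholder names, not the
# book's API), and `env` follows the toy interface sketched earlier.
def online_q_learner(env, model, episodes, alpha=0.1, gamma=0.9,
                     epsilon=0.1, max_prototypes=1000):
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            acts = env.actions(s)
            a = (random.choice(acts) if random.random() < epsilon
                 else max(acts, key=lambda act: model.predict(s, act)))
            r, s2, done = env.step(s, a)
            best_next = 0.0 if done else max(model.predict(s2, a2)
                                             for a2 in env.actions(s2))
            # one Q-update per learning experience (online)
            target = model.predict(s, a) + alpha * (r + gamma * best_next
                                                    - model.predict(s, a))
            model.update(s, a, target)            # may trigger a rupture (split)
            while model.size() > max_prototypes:  # keep the prototype budget fixed
                model.merge_least_useful()        # may merge leaves back together
            s = s2
```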
Basic operations in the regression tree: rupture and merging (sketched below).
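The sketch below shows one simple way these two operations can look on a regression-tree leaf, under assumptions of my own: a leaf stores (features, value) samples and predicts their mean; rupture splits a leaf on a chosen feature and threshold; merging collapses two sibling leaves whose predictions nearly agree. The splitting and merging criteria here are illustrative and are not the book's actual rules.

```python
# Illustrative leaf, rupture (split), and merge operations on a regression
# tree. The criteria are assumptions, not the book's actual rules.
class Leaf:
    def __init__(self, samples):
        self.samples = samples                      # list of (features, value)

    def predict(self):
        vals = [v for _, v in self.samples]
        return sum(vals) / len(vals) if vals else 0.0

def rupture(leaf, feature, threshold):
    """Split a leaf into two children on feature <= threshold."""
    left = Leaf([s for s in leaf.samples if s[0][feature] <= threshold])
    right = Leaf([s for s in leaf.samples if s[0][feature] > threshold])
    return left, right

def merge(left, right, tolerance=0.1):
    """Merge two sibling leaves back into one if their predictions agree."""
    if abs(left.predict() - right.predict()) <= tolerance:
        return Leaf(left.samples + right.samples)
    return None  # keep them separate
```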
Rules for a sound tree [figure: parent and children node relationships]
Sample Merging [figure walkthrough: the “smallest predecessor”, the node to be inserted, List 1, List 1.1, List 1.2]
The Agent [figure: the agent receives the reward and detectors’ signals and emits actuators’ signals]
Applications [figure: a book store]
Results, third application. Reason for this experiment: to evaluate the performance of the proposed method in a scenario we consider ideal for it, namely one for which no application-specific knowledge is available. What it took to learn a good policy: • Less than 2 minutes of CPU time. • Less than 25,000 learning experiences. • Less than 900 state-action-value tuples.