
Reinforcement learning. This is mostly taken from Dayan and Abbott, ch. 9.


  1. Reinforcement learning. This is mostly taken from Dayan and Abbott, ch. 9. Reinforcement learning differs from supervised learning in that there is no all-knowing teacher; the reinforcement signal carries less information. The central problem is temporal credit assignment.

  2. Example: spatial learning is impaired by block of NMDA receptors (Morris, 1989). [Figure: Morris water maze; a rat swims to find a hidden platform.]

  3. Solving this problem comprises two separate tasks:
  • Predicting reward
  • Choosing the correct action
  or, equivalently:
  • Policy evaluation (critic)
  • Policy improvement (actor)

  4. Classical vs. instrumental conditioning. Classical: think of Pavlov's dog. In instrumental (operant) conditioning the animal is rewarded for "correct" actions and not rewarded, or even punished, for incorrect ones; what the animal does (the policy) matters.

  5. Predicting reward: the Rescorla-Wagner rule. Notation: u – stimulus, r – reward, v – expected reward, w – weight (filter). The prediction is v = wu, with the learning rule w → w + ε δ u, where δ = r − v is the prediction error. For more than one stimulus, u and w become vectors: v = w · u, w → w + ε δ u.

  6. [Figure: Rescorla-Wagner simulations. Panels: random reward; learning, r = 1; extinction, r = 0.]
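A minimal MATLAB/Octave sketch of the Rescorla-Wagner rule above, run through a learning phase (r = 1) followed by extinction (r = 0); the learning rate ε = 0.1 and the trial counts are illustrative assumptions, not values from the slides.

% Rescorla-Wagner rule with a single stimulus (u = 1 on every trial)
epsilon = 0.1;                     % learning rate (assumed)
w = 0;                             % initial weight
v_hist = zeros(1, 200);
for t = 1:200
    u = 1;                         % stimulus is always presented
    if t <= 100, r = 1; else, r = 0; end   % learning, then extinction
    v = w * u;                     % predicted reward
    delta = r - v;                 % prediction error
    w = w + epsilon * delta * u;   % Rescorla-Wagner update
    v_hist(t) = v;
end
plot(v_hist); xlabel('trial'); ylabel('predicted reward v');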

  7. Predicting future reward: temporal difference (TD) learning. In more realistic conditions, especially in operant conditioning, the actual reward may come some time after the signal for the reward. What we care about is then not the immediate reward at this time point, but the total reward predicted given the choice made at this time. How can we estimate the total reward? The total average future reward at time t is R(t) = ⟨ Σ_{τ=0}^{T−t} r(t+τ) ⟩. Assume that we estimate this with a linear estimator: v(t) = Σ_{τ=0}^{t} w(τ) u(t−τ).

  8. Use the δ rule at time t: w(τ) → w(τ) + ε δ(t) u(t−τ), where δ(t) is the difference between the actual future rewards and the prediction of these rewards: δ(t) = Σ_{τ=0}^{T−t} r(t+τ) − v(t). But we do not know the future rewards Σ_τ r(t+τ). Instead we can approximate the sum by: Σ_{τ=0}^{T−t} r(t+τ) ≈ r(t) + v(t+1).

  9. Combining (1) the δ rule with (2) this approximation gives us the TD error: δ(t) = r(t) + v(t+1) − v(t). The temporal difference learning rule then becomes: w(τ) → w(τ) + ε δ(t) u(t−τ).
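As a sketch of the TD rule on slides 7–9, the snippet below learns the predicted future reward within a trial in a simplified tabular form (one value per time step, which corresponds to the weight form when the stimulus is a single impulse); the stimulus time, reward time, learning rate, and number of trials are assumptions for illustration.

% TD(0) within a trial: stimulus at t = 10, reward delivered at t = 50 (assumed)
T = 60; epsilon = 0.2; ntrials = 300;
v = zeros(1, T + 1);               % v(T+1) = 0: no reward predicted after the trial
r = zeros(1, T); r(50) = 1;        % delayed reward
for trial = 1:ntrials
    for t = 10:T                   % predictions start at the stimulus
        delta = r(t) + v(t + 1) - v(t);   % TD error: delta(t) = r(t) + v(t+1) - v(t)
        v(t) = v(t) + epsilon * delta;    % TD learning rule
    end
end
plot(v(1:T)); xlabel('time within trial'); ylabel('predicted future reward v(t)');

After training, v(t) rises at the stimulus and stays elevated until the reward, and the TD error moves from the time of the reward back to the time of the stimulus, the signature seen in the dopamine recordings of slide 10.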

  10. Dopamine and predicted reward. Activity of VTA dopaminergic neurons in a monkey. A: top – before learning, bottom – after learning. B: after learning; top – with reward, bottom – no reward.

  11. Generalizations of TD(0):
  • u can be a vector u, so w is also a vector; this handles more complex or multiple possible stimuli.
  • A decay (discount) term γ can be added, comparing the current location u with the location u′ moved to after action a: δ = r + γ v(u′) − v(u). This has the effect of putting a stronger emphasis on rewards that take fewer steps to reach.
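A sketch of the discounted TD update for locations, assuming a toy chain of three locations leading to a reward; γ, ε, and the chain itself are illustrative assumptions.

% Discounted TD(0) on a chain of locations 1 -> 2 -> 3 -> end, reward on reaching the end
gamma = 0.9; epsilon = 0.1;            % decay (discount) term and learning rate (assumed)
v = zeros(1, 4);                       % one value per location; location 4 is terminal
for trial = 1:500
    for s = 1:3
        s2 = s + 1;                    % location moved to after the action
        r = (s2 == 4);                 % reward of 1 only at the end of the chain
        delta = r + gamma * v(s2) - v(s);   % TD error with the decay term
        v(s) = v(s) + epsilon * delta;
    end
end
disp(v(1:3))                           % approaches [gamma^2 gamma 1] = [0.81 0.90 1.00]

The values of earlier locations are discounted by a factor of γ per step, which is the stronger emphasis on nearby rewards mentioned above.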

  12. Until now we have seen how to predict a reward. We still need to see how we make decisions about which path to take, or what policy to use. Consider the bee foraging example: blue and yellow flowers give different rewards, drawn from distributions P(rb) and P(ry).

  13. Learn "action values" mb and my (the actor); these will determine which choice to make. Assume rb = 1, ry = 2: what is the best choice we can make? The average reward is ⟨r⟩ = pb rb + py ry, where pb and py are the probabilities of choosing blue and yellow. What will maximize this reward?

  14. Learn "action values" mb and my; these will determine which choice to make. Use the softmax: pb = exp(β mb) / (exp(β mb) + exp(β my)), and similarly for py. This is a stochastic choice; β is a variability parameter. A good choice for the "action values" is to set them to the mean rewards: mb = ⟨rb⟩, my = ⟨ry⟩. This is also called the "indirect actor".
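A short MATLAB sketch of the softmax choice, assuming the action values have been set to the mean rewards mb = 1, my = 2; the next slide evaluates the same probabilities numerically for β = 1.

% Softmax (stochastic) choice between the blue and yellow flowers
beta = 1;                              % variability parameter
mb = 1; my = 2;                        % action values set to the mean rewards (indirect actor)
pb = exp(beta * mb) / (exp(beta * mb) + exp(beta * my));   % probability of choosing blue
py = 1 - pb;
if rand < pb
    choice = 'blue';
else
    choice = 'yellow';
end
disp([pb py]); disp(choice)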

  15. How good is this choice? Assume β = 1, rb = 1, ry = 2; what is ⟨r⟩?
>> rb = 1; ry = 2;
>> pb = exp(rb)/(exp(rb)+exp(ry))
pb = 0.2689
>> py = exp(ry)/(exp(rb)+exp(ry))
py = 0.7311
>> r_av = rb*pb + ry*py
r_av = 1.7311

  16. This choice can be learned using a delta rule. In the simulation, for t < 100: rb = 1, ry = 2; for t > 100: rb = 2, ry = 1. [Figure: action values over trials for β = 1 and β = 50.]
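A sketch of this simulation under stated assumptions (learning rate ε = 0.1, deterministic rewards): the action values are learned with a delta rule applied to the chosen action, while choices are drawn from the softmax. With β = 1 both flowers keep being sampled and the values track the switch at t = 100; with β = 50 the choice becomes nearly deterministic and adaptation after the switch is much slower.

% Indirect actor: delta rule on action values, softmax choice, rewards switch at t = 100
beta = 1; epsilon = 0.1;               % try beta = 50 to see the exploitation problem
mb = 0; my = 0;
T = 200; m_hist = zeros(2, T);
for t = 1:T
    if t < 100, rb = 1; ry = 2; else, rb = 2; ry = 1; end
    pb = exp(beta * mb) / (exp(beta * mb) + exp(beta * my));   % softmax choice probability
    if rand < pb
        mb = mb + epsilon * (rb - mb); % delta rule on the chosen action's value
    else
        my = my + epsilon * (ry - my);
    end
    m_hist(:, t) = [mb; my];
end
plot(m_hist'); legend('m_b', 'm_y'); xlabel('trial'); ylabel('action value');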

  17. Another option, the "direct actor", is to set the action values to maximize the expected reward ⟨r⟩. This can be done by stochastic gradient ascent on ⟨r⟩. For example, if the blue flower is chosen: mb → mb + ε (1 − pb)(rb − r0). So that generally, for action value mx given chosen action a: mx → mx + ε (δ_{ax} − px)(r − r0), where δ_{ax} = 1 if x = a and 0 otherwise. A good choice for r0 is the mean of rx over all possible choices. (See the D&A book, pg. 344.)
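A sketch of the direct actor for the two-flower task, assuming β = 1 and a running average of the reward for r0; the update below is the stochastic-gradient form given above, applied to every action value after each choice.

% Direct actor: stochastic gradient ascent on <r> with a softmax policy
beta = 1; epsilon = 0.1;
m = [0 0];                             % action values for [blue, yellow]
r_mean = [1 2];                        % mean rewards rb = 1, ry = 2 (from the earlier slides)
r0 = 0;                                % reinforcement comparison term
for t = 1:1000
    p = exp(beta * m) / sum(exp(beta * m));   % softmax policy
    a = 1 + (rand > p(1));             % sampled action: 1 = blue, 2 = yellow
    r = r_mean(a);                     % observed reward
    for x = 1:2
        m(x) = m(x) + epsilon * (r - r0) * ((x == a) - p(x));   % direct-actor update
    end
    r0 = r0 + 0.05 * (r - r0);         % running estimate of the mean reward
end
p = exp(beta * m) / sum(exp(beta * m));
disp(p)                                % the policy ends up favoring the yellow flower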

  18. The maze task and sequential action choice. Policy evaluation: with an initial random policy, what are the values of the locations? What would they be for an ideal policy?

  19. Policy improvement: using the direct actor, learn to improve the policy. At A, what are the updates for a left turn and for a right turn? Note: policy improvement and policy evaluation are best carried out sequentially: evaluate – improve – evaluate – improve …

  20. [Figure: values under the random policy: V(A) = 1.75, V(B) = 2.5, V(C) = 1.]
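A sketch of TD policy evaluation for this maze under the random policy, assuming the layout of the figure: from A the rat turns to B or C with no immediate reward, the two arms at B pay 5 and 0, and the two arms at C pay 2 and 0. The learned values approach the numbers shown above.

% Policy evaluation (critic) for the maze under a random policy
epsilon = 0.1; v = [0 0 0];            % values of locations [A B C]
for trial = 1:5000
    % at A: turn left or right at random, reaching B or C with no reward
    s2 = 2 + (rand > 0.5);             % 2 = B, 3 = C
    delta = 0 + v(s2) - v(1);          % TD error for the move A -> B/C
    v(1) = v(1) + epsilon * delta;
    % at B or C: turn left or right at random and collect the reward
    if s2 == 2, r = 5 * (rand > 0.5); else, r = 2 * (rand > 0.5); end
    delta = r - v(s2);                 % terminal, so the future value is 0
    v(s2) = v(s2) + epsilon * delta;
end
disp(v)                                % approaches V(A) = 1.75, V(B) = 2.5, V(C) = 1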

  21. Reinforcement learning - summary
