
580.691 Learning Theory Reza Shadmehr State estimation theory


Presentation Transcript


  1. 580.691 Learning Theory Reza Shadmehr State estimation theory

  2. Subject was instructed to pull on a knob that was fixed on a rigid wall. A) EMG recordings from arm and leg muscles. Before the biceps is activated, the brain activates the leg muscles to stabilize the lower body and prevent sway due to the anticipated pulling force on the upper body. B) When a rigid bar is placed on the upper body, the leg muscles are not activated when the biceps is activated. (Cordo and Nashner, 1982)

  3. Effect of eye movement on the memory of a visual stimulus. In the top panel, the filled circle represents the fixation point, the asterisk indicates the location of the visual stimulus, and the dashed circle indicates the receptive field of a cell in the LIP region of the parietal cortex. A) Discharge to the onset and offset of a visual stimulus in the cell’s receptive field. Abbreviations: H. eye, horizontal eye position; Stim, stimulus; V. eye, vertical eye position. B) Discharge during the time period in which a saccade brings the stimulus into the cell’s receptive field. The cell’s discharge increased before the saccade brought the stimulus into the receptive field. (From Duhamel et al., 1992)

  4. Why predict sensory consequences of motor commands?

  5. Subject looked at a moving cursor while a group of dots appeared on the screen for 300ms. In some trials the dots would remain still (A) while in other trials they would move together left or right with a constant speed (B). Subject indicated the direction of motion of the dots. From this result, the authors estimated the speed of subjective stationarity, i.e., the speed of dots for which the subject perceived them to be stationary. C) The unfilled circles represent performance of control subjects. Regardless of the speed of the cursor, they perceived the dots to be stationary only if their speed was near zero. The filled triangles represent performance of subject RW. As the speed of the cursor increased, RW perceived the dots to be stationary if their speed was near the speed of the cursor. (Haarmeier et al., 1997)

  6. Disorders of agency in schizophrenia relate to an inability to compensate for sensory consequences of self-generated motor commands. In a paradigm similar to that shown in the last figure, volunteers estimated whether, during motion of a cursor, the background moved to the right or left. By varying the background speed, at each cursor speed the experimenters estimated the speed of perceptual stationarity, i.e., the speed of background motion for which the subject saw the background to be stationary. They then computed a compensation index as the difference between the speed of eye movement and the speed of the background when perceived to be stationary, divided by the speed of eye movement. The subset of schizophrenic patients who had delusional symptoms showed a greater deficit than controls in their ability to compensate for sensory consequences of self-generated motor commands. (From Lindner et al., 2005)

  7. Combining predictions with observations

  8. Parameter variance depends only on input selection and noise. A noisy process produces n data points and we form an ML estimate of w. We run the noisy process again with the same sequence of x’s and re-estimate w. The distribution of the resulting w has a var-cov matrix that depends only on the sequence of inputs, the bases that encode those inputs, and the noise sigma.
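As a sketch of this claim (all numbers here are illustrative, not from the slides): rerun the noisy process y = Xw* + noise many times with the same input sequence X, re-estimate w by least squares each time, and compare the spread of the estimates to the formula sigma² (XᵀX)⁻¹.

```python
# Sketch: the var-cov of the ML estimate of w depends only on the fixed
# input sequence X and the noise sigma, not on the particular noise draw.
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200, 0.5
X = rng.normal(size=(n, 2))          # fixed input sequence, reused every run
w_true = np.array([1.0, -2.0])       # illustrative "true" weights

estimates = []
for _ in range(5000):
    y = X @ w_true + sigma * rng.normal(size=n)   # fresh noise, same X
    w_hat, *_ = np.linalg.lstsq(X, y, rcond=None) # ML (least-squares) estimate
    estimates.append(w_hat)

empirical_cov = np.cov(np.array(estimates).T)      # spread of the estimates
theoretical_cov = sigma**2 * np.linalg.inv(X.T @ X)
print(empirical_cov)
print(theoretical_cov)
```

The two matrices agree closely, confirming that the parameter variance is fixed by the inputs and the noise level alone.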

  9. The Gaussian distribution and its var-cov matrix. A 1-D Gaussian distribution is defined as p(x) = (1/sqrt(2*pi*sigma^2)) exp(-(x-mu)^2 / (2*sigma^2)). In n dimensions it generalizes to p(x) = (2*pi)^(-n/2) |C|^(-1/2) exp(-(1/2)(x-mu)^T C^(-1) (x-mu)). When x is a vector, the variance is expressed in terms of a covariance matrix C, where ρij corresponds to the degree of correlation between variables xi and xj.
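A minimal sketch of the covariance matrix described above, built from two standard deviations and a correlation ρ (the numeric values are illustrative):

```python
# Sketch: a 2-D Gaussian covariance matrix C with off-diagonal entries
# rho * s1 * s2; samples drawn from it reproduce the correlation rho.
import numpy as np

s1, s2, rho = 1.0, 2.0, 0.8
C = np.array([[s1**2,         rho * s1 * s2],
              [rho * s1 * s2, s2**2        ]])

rng = np.random.default_rng(1)
x = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=100_000)
r = np.corrcoef(x.T)[0, 1]      # empirical correlation between x1 and x2
print(r)
```

The empirical correlation comes out close to ρ = 0.8, matching the positively correlated scatter plot on the next slide.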

  10. [Figure: three scatter plots of samples from 2-D Gaussians] x1 and x2 are positively correlated; x1 and x2 are not correlated; x1 and x2 are negatively correlated.

  11. [Figure: scatter plot of inputs] • Parameter uncertainty: Example 1 • Input history: x1 was “on” most of the time, so I’m pretty certain about w1. However, x2 was “on” only once, so I’m uncertain about w2.

  12. [Figure: scatter plot of inputs] • Parameter uncertainty: Example 2 • Input history: x1 and x2 were “on” mostly together. The weight var-cov matrix shows what I learned: I do not know the individual values of w1 and w2 with much certainty, but x1 appeared slightly more often than x2, so I’m a little more certain about the value of w1.

  13. [Figure: scatter plot of inputs] • Parameter uncertainty: Example 3 • Input history: x2 was mostly “on”. I’m pretty certain about w2, but I am very uncertain about w1. Occasionally x1 and x2 were on together, so I have some reason to believe that their weights are correlated.

  14. Effect of uncertainty on learning rate • When you observe an error in trial n, the amount that you should change w should depend on how certain you are about w. The more certain you are, the less you should be influenced by the error. The less certain you are, the more you should “pay attention” to the error. This error sensitivity is the Kalman gain. Rudolf E. Kalman (1960) A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82 (Series D): 35-45. Research Institute for Advanced Study, 7212 Bellona Ave, Baltimore, MD.

  15. Example of the Kalman gain: running estimate of an average. w(n), the online estimate of the mean of y, blends the past estimate with the new measurement: w(n) = w(n-1) + (1/n)(y(n) - w(n-1)). As n increases, we trust our past estimate w(n-1) a lot more than the new observation y(n). The Kalman gain here is 1/n: the learning rate decreases as the number of samples increases.
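The running-average example above can be sketched in a few lines (the data values are illustrative):

```python
# Sketch: online mean estimate with a Kalman gain of 1/n.  The recursion
# w(n) = w(n-1) + (1/n) (y(n) - w(n-1)) reproduces the batch mean exactly.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(loc=3.0, scale=1.0, size=1000)   # noisy samples, true mean 3.0

w = 0.0
for n, y_n in enumerate(y, start=1):
    k = 1.0 / n                  # Kalman gain shrinks as samples accumulate
    w = w + k * (y_n - w)        # trust the past estimate more and more
print(w)
```

After the loop, w equals the batch mean of y (up to floating-point rounding), showing that the decreasing gain performs exact online averaging.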

  16. Example of the Kalman gain: running estimate of variance. sigma_hat² is the online estimate of the variance of y, updated trial by trial with a 1/n gain in the same way.
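A one-pass variance estimate in the same spirit can be written with Welford's recursion (this particular recursion is an assumption for illustration; the slide's exact formula is not reproduced in the transcript):

```python
# Sketch: online (one-pass) estimate of the variance of y.  The mean is
# updated with a 1/n gain, and m2 accumulates the sum of squared deviations.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=0.0, scale=2.0, size=5000)   # true variance 4.0

w, m2 = 0.0, 0.0
for n, y_n in enumerate(y, start=1):
    err = y_n - w
    w += err / n                 # running mean, gain 1/n
    m2 += err * (y_n - w)        # uses mean before and after the update
var_hat = m2 / len(y)            # online estimate of the variance of y
print(var_hat)
```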

  17. Some observations about the variance of model parameters. We note that P is simply the var-cov matrix of our model weights. It represents the uncertainty in our estimates of the model parameters. We want to update the weights in such a way as to minimize the trace of this variance. The trace is the expected sum of squared errors between our estimates of w and the true values.

  18. Trace of parameter var-cov matrix is the sum of squared parameter errors. Our objective is to find the learning rate k (the Kalman gain) that minimizes the sum of the squared errors in our parameter estimates. This sum is the trace of the P matrix. Therefore, given observation y(n), we want to find k such that we minimize the variance of our estimate of w.

  19. Objective: adjust the learning gain in order to minimize model uncertainty. The quantities in the derivation: the hypothesis about the data observation in trial n; my estimate of w* before I see y in trial n, given that I have seen y up to n-1; the error in trial n; my estimate after I see y in trial n; the a priori variance of the parameters; the a posteriori variance of the parameters.

  20. Evolution of parameter uncertainty

  21. Find K to minimize trace of uncertainty

  22. Find K to minimize trace of uncertainty (continued); note that the innovation variance term in the derivation is a scalar.

  23. The Kalman gain If I have a lot of uncertainty about my model, P is large compared to sigma. I will learn a lot from the current error. If I am pretty certain about my model, P is small compared to sigma. I will tend to ignore the current error.
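For the scalar case described above, the gain k = P / (P + sigma²) makes the trade-off explicit (the numeric values are illustrative):

```python
# Sketch: scalar Kalman gain.  Large model uncertainty P relative to the
# measurement noise sigma^2 gives a gain near 1 (learn a lot from the
# error); small P gives a gain near 0 (mostly ignore the error).
def kalman_gain(P, sigma2):
    return P / (P + sigma2)

g_uncertain = kalman_gain(100.0, 1.0)   # very uncertain model
g_confident = kalman_gain(0.01, 1.0)    # very confident model
print(g_uncertain, g_confident)
```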

  24. Update of model uncertainty Model uncertainty decreases with every data point that you observe.

  25. In this model, we hypothesize that the hidden variables, i.e., the “true” weights, do not change from trial to trial; a hidden variable gives rise to the observed variables. The algorithm: (1) a priori estimate of the mean and variance of the hidden variable before I observe the first data point; (2) update of the estimate of the hidden variable after I observe the data point; (3) forward projection of the estimate to the next trial.

  26. In this model, we hypothesize that the hidden variables change from trial to trial. The algorithm: (1) a priori estimate of the mean and variance of the hidden variable before I observe the first data point; (2) update of the estimate of the hidden variable after I observe the data point; (3) forward projection of the estimate to the next trial.
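A minimal sketch of this trial loop, under assumed notation (w is the weight estimate, P its var-cov, Q the state-update noise variance, sigma² the measurement noise; the demo below uses Q = 0, i.e., fixed hidden weights, while Q > 0 would correspond to the changing-state model of this slide):

```python
# Sketch: one Kalman-filter trial, then a small tracking demo.
import numpy as np

def kalman_step(w_hat, P, x, y, Q, sigma2):
    """Update on the prediction error, then project forward with noise Q."""
    y_hat = x @ w_hat                  # prediction before seeing y
    S = x @ P @ x + sigma2             # innovation variance (a scalar)
    k = P @ x / S                      # Kalman gain
    w_hat = w_hat + k * (y - y_hat)    # update after seeing y
    P = P - np.outer(k, x @ P)         # posterior uncertainty shrinks
    P = P + Q * np.eye(len(w_hat))     # state noise grows the next prior
    return w_hat, P

rng = np.random.default_rng(4)
w_true = np.array([1.0, -1.0])         # illustrative fixed hidden weights
w_hat, P = np.zeros(2), np.eye(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = x @ w_true + 0.1 * rng.normal()
    w_hat, P = kalman_step(w_hat, P, x, y, Q=0.0, sigma2=0.01)
print(w_hat)
```

After 200 trials the estimate sits close to the true weights, and P records how certain the learner is about each one.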

  27. • Learning rate is proportional to the ratio between two uncertainties: uncertainty about my model parameters vs. uncertainty about my measurement. • After we observe an input x, the uncertainty associated with the weight of that input decreases. • Because of state update noise Q, uncertainty increases as we form the prior for the next trial.

  28. Outline: 2. Bayesian state estimation. 3. Causal inference. 4. The influence of priors. 5. Behaviors that are not Bayesian.

  29. Comparison of the Kalman gain to LMS (see derivation in homework). In the Kalman gain approach, the P matrix depends on the history of all previous and current inputs, and our estimate converges in a single pass over the data set. In LMS, the learning rate is simply a constant that does not depend on past history; we do not maintain the var-cov matrix P on each trial, but we need multiple passes before our estimate converges.
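The contrast can be sketched by running both update rules once over the same (noise-free, for a clean comparison) data set; the inputs, weights, and the LMS rate eta are illustrative choices:

```python
# Sketch: Kalman update (history-dependent gain via P) vs. LMS (fixed
# learning rate eta) on a single pass through 50 trials.
import numpy as np

rng = np.random.default_rng(5)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(50, 2))
y = X @ w_true                        # noise-free observations

# Kalman: P tracks parameter uncertainty; tiny sigma^2 regularizes the gain
w_k, P = np.zeros(2), 100.0 * np.eye(2)
for x_n, y_n in zip(X, y):
    k = P @ x_n / (x_n @ P @ x_n + 1e-6)
    w_k = w_k + k * (y_n - x_n @ w_k)
    P = P - np.outer(k, x_n @ P)

# LMS: constant learning rate, same single pass over the same data
w_l, eta = np.zeros(2), 0.05
for x_n, y_n in zip(X, y):
    w_l = w_l + eta * (y_n - x_n @ w_l) * x_n

err_kalman = np.linalg.norm(w_k - w_true)   # essentially converged
err_lms = np.linalg.norm(w_l - w_true)      # still far after one pass
print(err_kalman, err_lms)
```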

  30. [Figure: Kalman gain and parameter uncertainty over trials for different noise levels] Effect of state and measurement noise on the Kalman gain. High noise in the state update model produces increased uncertainty in model parameters; this produces high learning rates. High noise in the measurement also increases parameter uncertainty, but this increase is small relative to the measurement uncertainty, so higher measurement noise leads to lower learning rates.

  31. [Figure: Kalman gain over trials for different values of a] Effect of state transition auto-correlation on the Kalman gain. Learning rate is higher in a state model that has high auto-correlation (larger a). That is, if the learner assumes that the world is changing slowly (a close to 1), then the learner will have a large learning rate.

  32. Kalman filter as a model of animal learning. Suppose that x represents inputs from the environment: a light and a tone. Suppose that y represents rewards, like a food pellet. The equations describe the animal’s model of the experimental setup, the animal’s expectation on trial n, and the animal’s learning from trial n.

  33. Various forms of classical conditioning in animal psychology Not explained by LMS, but predicted by the Kalman filter. Table from Peter Dayan

  34. [Figure: simulated weights, Kalman gains, and outputs] Sharing paradigm. Train: {x1,x2} -> 1. Test: x1 -> ?, x2 -> ?. Result: x1 -> 0.5, x2 -> 0.5. Learning with the Kalman gain is compared with LMS.

  35. Blocking. Kamin (1968) Attention-like processes in classical conditioning. In: Miami symposium on the prediction of behavior: aversive stimulation (ed. MR Jones), pp. 9-33. Univ. of Miami Press. Kamin trained an animal to continuously press a lever to receive food. He then paired a light (conditioned stimulus) with a mild electric shock to the foot of the rat (unconditioned stimulus). In response to the shock, the animal would reduce its lever-press activity. Soon the animal learned that the light predicted the shock, and therefore reduced lever pressing in response to the light. He then paired the light with a tone when giving the electric shock. After this second stage of training, he observed that when the tone was given alone, the animal did not reduce its lever pressing. The animal had not learned anything about the tone.

  36. [Figure: simulated weights, Kalman gains, and uncertainties] Blocking paradigm. Train: x1 -> 1, then {x1,x2} -> 1. Test: x2 -> ?, x1 -> ?. Result: x2 -> 0, x1 -> 1. Learning with the Kalman gain is compared with LMS.
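The blocking paradigm can be sketched with the Kalman update (trial counts and noise values are assumed, not taken from the slides): x1 alone predicts the reward first, then x1 and x2 appear together. Because x1 already explains the reward, there is almost no error left to drive learning about x2.

```python
# Sketch: blocking under the Kalman update.  Phase 1: x1 -> 1.
# Phase 2: {x1, x2} -> 1.  The prediction error is near zero in phase 2,
# so w2 stays near zero: learning about x2 is "blocked".
import numpy as np

w, P, sigma2 = np.zeros(2), np.eye(2), 0.01
trials = [(np.array([1.0, 0.0]), 1.0)] * 20 \
       + [(np.array([1.0, 1.0]), 1.0)] * 20
for x, y in trials:
    k = P @ x / (x @ P @ x + sigma2)
    w = w + k * (y - x @ w)
    P = P - np.outer(k, x @ P)
print(w)     # w1 near 1, w2 near 0
```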

  37. [Figure: simulated weights, Kalman gains, and uncertainties] Backwards blocking paradigm. Train: {x1,x2} -> 1, then x1 -> 1. Test: x2 -> ?. Result: x2 -> 0. Learning with the Kalman gain is compared with LMS.

  38. Different output models Suppose that x represents inputs from the environment: a light and a tone. Suppose that y represents a reward, like a food pellet. Case 1: the animal assumes an additive model. If each stimulus predicts one reward, then if the two are present together, they predict two rewards. Case 2: the animal assumes a weighted average model. If each stimulus predicts one reward, then if the two are present together, they still predict one reward, but with higher confidence. The weights b1 and b2 should be set to the inverse of the variance (uncertainty) with which each stimulus x1 and x2 predicts the reward.

  39. General case of the Kalman filter. The algorithm: (1) a priori estimate of the mean and variance of the hidden variable before I observe the first data point; (2) update of the estimate of the hidden variable after I observe the data point; (3) forward projection of the estimate to the next trial.

  40. How to set the initial var-cov matrix. In homework, we will show how P can be updated in general through its inverse. Now if we have absolutely no prior information on w, then before we see the first data point P(1|0) is infinite, and therefore its inverse is zero. After we see the first data point, we use that update equation to form our estimate. A reasonable and conservative choice is to set the initial value of P to this post-first-observation value.

  41. Data fusion. Suppose that we have two sensors that independently measure something. We would like to combine their measures to form a better estimate. What should the weights be? Suppose that we know that sensor 1 gives us measurement y1 with Gaussian noise of variance σ1², and similarly sensor 2 gives us measurement y2 with Gaussian noise of variance σ2². A good idea is to weight each sensor in inverse proportion to its noise variance.
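The inverse-variance weighting can be sketched directly (the readings and variances are illustrative). The fused estimate lands closer to the more reliable sensor, and the fused variance is smaller than either sensor's variance alone:

```python
# Sketch: fuse two independent noisy measurements by weighting each in
# inverse proportion to its noise variance.
def fuse(y1, var1, y2, var2):
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    w2 = (1.0 / var2) / (1.0 / var1 + 1.0 / var2)
    y = w1 * y1 + w2 * y2                 # inverse-variance weighted mean
    var = 1.0 / (1.0 / var1 + 1.0 / var2) # fused variance beats both sensors
    return y, var

y, var = fuse(5.0, 1.0, 7.0, 4.0)
print(y, var)   # 5.4 0.8: closer to the reliable sensor, variance < min(1, 4)
```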

  42. Data fusion via the Kalman filter. To see why this makes sense, let’s put forth a generative model that describes our hypothesis about how the data we are observing are generated: a hidden variable gives rise to the observed variables.

  43. Priors; our first observation; variance of our posterior estimate. (See homework for the derivation.)

  44. The real world vs. what our sensors tell us. Notice that after we make our first observation, the variance of our posterior is smaller than the variance of either sensor.

  45. [Figure: probability densities] Combining equally noisy sensors; combining sensors with unequal noise. The curves show sensor 1, sensor 2, and the combined estimate: the mean of the posterior and its variance.

  46. Puzzling results: savings and memory despite “washout”. Gain = eye displacement divided by target displacement. Result 1: After changes in gain, monkeys exhibit recall despite behavioral evidence for washout. Kojima et al. (2004) Memory of learning facilitates saccade adaptation in the monkey. J Neurosci 24:7531.

  47. Puzzling results: improvements in performance without error feedback. Result 2: Following changes in gain and a long period of washout, monkeys exhibit no recall. Result 3: Following changes in gain and a period of darkness, monkeys exhibit a “jump” in memory. Kojima et al. (2004) J Neurosci 24:7531.

  48. The learner’s hypothesis about the structure of the world: • 1. The world has many hidden states; what I observe is a linear combination of these states. • 2. The hidden states change from trial to trial; some change slowly, others change fast. • 3. The states that change fast have larger noise than states that change slowly. The model consists of a slow system and a fast system, each with its own state transition equation, combined in a single output equation.

  49. [Figure: simulated weights, gains, and outputs] Simulations for savings. The critical assumption is that in the fast system there is much more noise than in the slow system. This produces a larger learning rate in the fast system.

  50. [Figure: simulated weights and outputs] Simulations for spontaneous recovery despite zero error feedback (error clamp). In the error-clamp period, estimates are still made, yet the weight update equation sees no error; therefore the effect of the Kalman gain in the error-clamp period is zero. Nevertheless, the weights continue to change because of the state update equations: the fast weights rapidly rebound to zero, while the slow weights slowly decline. The sum of these two changes produces a “spontaneous recovery” after washout.
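The fast/slow dynamics described above can be sketched with a deterministic two-state model (retention factors, learning rates, and trial counts are all assumed for illustration; the fast state has the larger learning rate and the faster decay):

```python
# Sketch: two-state (fast/slow) model.  Adaptation to +1, washout back
# toward 0, then an error clamp in which only the state decay acts.
a_slow, a_fast = 0.998, 0.80    # retention: slow state decays slowly
k_slow, k_fast = 0.02, 0.30     # learning rates: fast state learns quickly

w_slow = w_fast = 0.0
output = []
schedule = [1.0] * 100 + [0.0] * 50     # adaptation, then washout
for target in schedule:
    err = target - (w_slow + w_fast)
    w_slow = a_slow * w_slow + k_slow * err
    w_fast = a_fast * w_fast + k_fast * err
    output.append(w_slow + w_fast)

adapted = output[99]                # net output after adaptation
net_after_washout = output[-1]      # behavior largely washed out

# Error clamp: the update sees no error, so the states only decay.
for _ in range(30):
    w_slow *= a_slow
    w_fast *= a_fast
recovered = w_slow + w_fast         # fast state rebounds to zero quickly,
                                    # exposing the surviving slow state
print(adapted, net_after_washout, recovered)
```

Because the negative fast state decays away much faster than the positive slow state, the net output in the clamp rises above its post-washout level, reproducing the "spontaneous recovery" the slide describes.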
