Understand the Rescorla-Wagner model of classical conditioning. Explore its key concepts: associative strength, blocking, unblocking, extinction, inhibition, and the overexpectation effect.
PSY 402 Theories of Learning Chapter 4 – Theories of Conditioning
Rescorla-Wagner Model • Classical conditioning occurs only if the US (UCS) is surprising to the organism. • If the UCS is already predicted by a CS, then it is not surprising – it is expected. • When the CS predicts the UCS perfectly, no further learning occurs. • The asymptote (lambda, λ) is the point where learning levels off (no further increase in learning occurs).
Parts of the Model • ΔV = αβ(λ – V) • V is the associative strength (amount of learning). • ΔV is the change in learning (the increase in associative strength). • α and β are the saliences of the CS and UCS. • λ – V is the surprisingness of the US (the distance from the asymptote).
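Run trial by trial, this update rule produces the familiar negatively accelerated learning curve. A minimal Python sketch, assuming αβ = .2 (the value the later slides use) and λ = 1.0; the function name rw_update is ours:

```python
def rw_update(v, alpha_beta=0.2, lam=1.0):
    """One Rescorla-Wagner trial: change in associative strength."""
    return alpha_beta * (lam - v)

v = 0.0  # no learning yet
for trial in range(10):
    dv = rw_update(v)   # large when the US is surprising (V far from lambda)
    v += dv             # carry the new strength into the next trial
    print(f"trial {trial + 1:2d}: dV = {dv:.3f}, V = {v:.3f}")
```

ΔV shrinks on every trial because V closes in on λ: the closer the CS comes to predicting the UCS perfectly, the less there is left to learn.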
Multiple Conditioned Stimuli (CSs) • The basic model explains changes in learning with one UCS and one CS. • This doesn't explain what happens during blocking and unblocking, with multiple CSs. • ΔV = αβ(λ – ΣV) • When multiple CSs are present, ΣV is the sum of the associative strengths of all of the CSs (such as VN + VL).
Blocking • First a noise is conditioned so that VN = 1.0. • Next a light is added. The formula predicts its associative strength: • ΔVL = αβ(λ – ΣV) • ΣV = VN + VL • If we assume that αβ = .2 and VL is 0 because no learning to the light has occurred yet, then: • ΔVL = .2[1.0 – (1.0 + 0)] = 0
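A sketch of these blocking trials in the same style (slide values: αβ = .2, λ = 1.0, the noise pre-trained to VN = 1.0):

```python
alpha_beta, lam = 0.2, 1.0
v_noise, v_light = 1.0, 0.0   # noise fully conditioned; light is new

for trial in range(5):
    sigma_v = v_noise + v_light               # summed strength of the compound
    dv_light = alpha_beta * (lam - sigma_v)   # shared surprise term
    v_light += dv_light
    print(f"trial {trial + 1}: dV_L = {dv_light:.3f}, V_L = {v_light:.3f}")

# dV_L = 0 on every trial: the noise already predicts the US perfectly,
# so the light never gains strength. The noise blocks the light.
```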
Unblocking • As before, a noise is conditioned so that VN = 1.0. • A stronger US is presented along with the new CS, the light (VL). • As before, the formula predicts its associative strength: • ΔVL = αβ(λ – ΣV) • ΣV = VN + VL • Again, we assume that αβ = .2 and VL is 0, but now the stronger US supports a λ of 2.0 instead of 1.0: • ΔVL = .2[2.0 – (1.0 + 0)] = .2[1.0] = .2
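The same sketch with the stronger US (λ = 2.0) shows the light now gaining strength:

```python
alpha_beta, lam = 0.2, 2.0    # stronger US raises the asymptote
v_noise, v_light = 1.0, 0.0

for trial in range(5):
    sigma_v = v_noise + v_light
    dv = alpha_beta * (lam - sigma_v)   # positive again: the US is surprising
    v_noise += dv
    v_light += dv
    print(f"trial {trial + 1}: V_N = {v_noise:.3f}, V_L = {v_light:.3f}")

# Trial 1 gives dV_L = .2 * [2.0 - (1.0 + 0)] = .2, matching the slide:
# the stronger US "unblocks" learning about the light.
```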
Extinction • During extinction, the CS is presented without the UCS. • This is the same as presenting a UCS with intensity = 0. • The formula predicts the associative strength during extinction: • ΔVN = αβ(λ – V), but λ is now 0 (due to extinction). • ΔVN = .2[0 – 1] = –.2 • The associative strength is decreasing. • Use the decreased value of VN (1 – .2 = .8) on the next trial.
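Iterating the extinction update with λ = 0 shows V decaying toward zero; a sketch with the slide's values:

```python
alpha_beta, lam = 0.2, 0.0    # no US, so the asymptote is 0
v_noise = 1.0                 # fully conditioned before extinction begins

for trial in range(5):
    dv = alpha_beta * (lam - v_noise)   # negative: strength is lost
    v_noise += dv                       # decreased value carries forward
    print(f"trial {trial + 1}: dV_N = {dv:.3f}, V_N = {v_noise:.3f}")

# Trial 1: dV_N = .2 * (0 - 1) = -.2, so V_N drops to .8, as on the slide;
# later trials lose less because there is less strength left to lose.
```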
Inhibition • During inhibition, a second CS, the light (VL), that has never been associated with the UCS (VL = 0) is presented, and the compound appears without the UCS (λ = 0). • The formula predicts the associative strength for both CSs: • ΔVN = αβ(λ – ΣV) and ΔVL = αβ(λ – ΣV) • ΔVN = .2[0 – (1.0 + 0)] = –.2 • ΔVL = .2[0 – (1.0 + 0)] = –.2 • ΣV = VN + VL
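Running the compound without the US for several trials shows the light's strength going negative, i.e., becoming a conditioned inhibitor (same illustrative values):

```python
alpha_beta, lam = 0.2, 0.0    # compound presented without the US
v_noise, v_light = 1.0, 0.0   # trained noise plus a novel light

for trial in range(10):
    sigma_v = v_noise + v_light
    dv = alpha_beta * (lam - sigma_v)   # both CSs receive the same change
    v_noise += dv
    v_light += dv
    print(f"trial {trial + 1}: V_N = {v_noise:+.3f}, V_L = {v_light:+.3f}")

# V_L falls below zero: the light becomes an inhibitor. Learning stops once
# sigma_V reaches 0 (near V_N = .5, V_L = -.5), because lam - sigma_V = 0.
```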
Protection from Extinction • When extinction of an excitor takes place together with extinction of an inhibitor, the excitor is never fully extinguished. • This is called protection from extinction. • To extinguish an excitor fully, and faster, pair it with another excitor (another CS associated with the US). • The model predicts both of these results.
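Both predictions fall out of the ΣV term; a sketch contrasting the two cases (illustrative values, with the inhibitor assumed at VL = –1.0):

```python
alpha_beta, lam = 0.2, 0.0    # extinction: US absent

# Case 1: excitor extinguished in compound with an inhibitor.
v_exc, v_inh = 1.0, -1.0
for trial in range(5):
    dv = alpha_beta * (lam - (v_exc + v_inh))   # sigma_V = 0, so dv = 0
    v_exc += dv
    v_inh += dv
print(f"with inhibitor: V_excitor = {v_exc:.3f}")  # still 1.0: protected

# Case 2: excitor extinguished in compound with another excitor (V = 1.0).
v_a, v_b = 1.0, 1.0
for trial in range(5):
    dv = alpha_beta * (lam - (v_a + v_b))       # sigma_V starts at 2.0
    v_a += dv
    v_b += dv
print(f"with excitor:   V_excitor = {v_a:.3f}")  # drops quickly
```

In case 1 the inhibitor cancels the excitor, so the absent US is fully expected and no learning occurs; in case 2 the compound makes the absent US doubly surprising, so extinction is faster than it would be for either excitor alone.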
Overexpectation Effect • The value of a model is that it predicts new findings. • If you pair two previously conditioned CSs (excitors) on the same trial, V for each will decrease until ΣV equals λ. • This is because ΣV “overexpects” the UCS. • Similarly, if a new CS (X) is added to the pair, it will become an inhibitor.
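A sketch of the overexpectation prediction: two excitors each at V = 1.0 are compounded with the same US (λ = 1.0), so ΣV = 2.0 "overexpects" the US and both lose strength until the sum equals λ:

```python
alpha_beta, lam = 0.2, 1.0
v_a, v_b = 1.0, 1.0           # two independently trained excitors

for trial in range(15):
    sigma_v = v_a + v_b
    dv = alpha_beta * (lam - sigma_v)   # negative while sigma_V > lambda
    v_a += dv
    v_b += dv

print(f"V_A = {v_a:.3f}, V_B = {v_b:.3f}, sum = {v_a + v_b:.3f}")
# Each excitor settles near .5: losses despite continued US pairings,
# which is the model's surprising prediction. A new CS X added to the
# compound would share the negative dv and become an inhibitor.
```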
Contextual Cues • Contextual cues consist of everything in the environment in addition to the CS and UCS. • They cannot be ignored simply because the experimenter is not manipulating them. • Whenever a CS or a UCS appears “alone,” it is still being paired with the context. • When the context is treated as another CS, the idea of blocking explains the learning. • In a zero-contingency procedure, no conditioning accrues to the CS because the conditioned context blocks it.
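A sketch of the zero-contingency case under this reading: the context is present on every trial, the nominal CS on only half, and the US occurs on all of them (the schedule and values are illustrative):

```python
alpha_beta, lam = 0.2, 1.0
v_context, v_cs = 0.0, 0.0

for trial in range(100):
    cs_present = (trial % 2 == 0)   # US equally likely with or without the CS
    sigma_v = v_context + (v_cs if cs_present else 0.0)
    dv = alpha_beta * (lam - sigma_v)
    v_context += dv                 # the context is paired with every US
    if cs_present:
        v_cs += dv

print(f"V_context = {v_context:.3f}, V_CS = {v_cs:.3f}")
# The context absorbs the associative strength and blocks the CS, so the
# CS ends up with little strength despite its many pairings with the US.
```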
Comparator Theories • An alternative theory to Rescorla-Wagner proposes that the CS and UCS are associated and the UCS and context are associated. • The two sets of associations are compared to determine the amount of responding to the CS. • The comparison determines the responding, not the learning. • Strengthening or weakening the context, after learning, affects the amount of responding, supporting the theory.
Problems with Rescorla-Wagner • It predicts that presenting an inhibitory CS without the UCS should extinguish the inhibition, but it doesn't. • The model cannot account for latent inhibition (slower conditioning after preexposure to the CS). • Mackintosh demonstrated that animals learn to ignore redundant stimuli; the model doesn't predict this learning.
The Mackintosh Model • Mackintosh proposed that the amount of learning depends on how much attention the animal pays to the CS. • Attention to the CS is the α term in the Rescorla-Wagner model. • Alpha increases when the CS is the best predictor of the UCS, so conditioning accrues to the best predictor.
Criticisms of the Mackintosh Model • The model does a good job of explaining latent inhibition and the findings raised against Rescorla-Wagner, but other problems arose. • While attention is important, it doesn't necessarily increase when a CS becomes the best predictor. • Hall & Pearce showed that preexposure to a tone that was a good predictor of weak shock didn't help learning when a stronger shock was used, contrary to the model's prediction that attention to a good predictor should stay high.
Conditioned Inhibition • A CS can signal the presence of a UCS. • This is called excitation (CS+). • A CS that never appears with the UCS signals the absence of the UCS; it becomes an “all clear” signal. • This is called inhibition (CS–). • In fear conditioning, an excitor produces anxiety; an inhibitor produces relief or safety.
Pearce-Hall Model • Animals don't waste attention on stimuli whose meaning is already well understood. • Instead, they devote attention to understanding new stimuli. • In their model, the value of alpha depends on how surprising the UCS was on the previous trial. • If the UCS is surprising, the CS is not yet well understood, and alpha stays high.
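The Pearce-Hall learning rule is commonly written ΔV = SαΛ, with α on each trial set to the absolute surprise |λ – ΣV| from the trial before; the slide states only the α idea, so the rest of this sketch is our assumption (S = .5 and λ = 1.0 are illustrative):

```python
s, lam = 0.5, 1.0
v, alpha = 0.0, 1.0    # a novel CS starts with full attention

for trial in range(8):
    dv = s * alpha * lam   # learning scaled by current attention
    v += dv
    alpha = abs(lam - v)   # next trial's attention = this trial's surprise
    print(f"trial {trial + 1}: V = {v:.3f}, alpha = {alpha:.3f}")

# As the CS comes to predict the US, surprise (and with it attention)
# falls toward zero: no attention is wasted on a well-understood stimulus.
```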