
Understanding Behavior and its Consequences in Learning Theories

This chapter surveys two early approaches to learning: reinforcement theory (Thorndike’s “Law of Effect” with cats in the puzzle box and Skinner’s operant work) and contiguity theory (Guthrie’s contiguity principle and Estes’ stimulus sampling theory). It also discusses Tolman’s operational behaviorism and covers latent learning, stimulus control, discriminative stimuli, types of reinforcers, and behavior chains.


Presentation Transcript


  1. PSY 402 Theories of Learning Chapter 7 – Behavior & Its Consequences Instrumental & Operant Learning

  2. Two Early Approaches • Reinforcement Theory • Thorndike’s “Law of Effect” for cats in the puzzle box. • Skinner boxes – rats pressing bars • Contiguity Theory • Guthrie – association is enough • Estes – Stimulus Sampling Theory

  3. Problems with Contiguity Theory • Guthrie proposed that no reinforcement was needed – just contiguity (closeness) in time and place. • If learning is immediate and one-trial, why are learning curves gradual? • Answer: only a few stimulus elements are associated on each trial, and more accumulate with each additional trial. • Guthrie’s view was largely wrong but influential – it inspired Estes’ Stimulus Sampling Theory.
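
To see why one-trial, all-or-none conditioning of individual elements can still produce a gradual learning curve, here is a minimal simulation sketch in the spirit of Estes’ Stimulus Sampling Theory. The code and its parameters (a 100-element stimulus population, a sampling probability of 0.1) are illustrative assumptions, not values from the chapter.

```python
import random

# Sketch of Estes' stimulus sampling idea (assumed parameters).
# Each element is conditioned all-or-none in a single trial, but only a
# random sample of elements is present/attended on any given trial, so the
# aggregate response probability rises gradually.

N_ELEMENTS = 100   # hypothetical size of the stimulus element population
THETA = 0.1        # hypothetical probability an element is sampled on a trial
N_TRIALS = 30

def simulate(n_elements=N_ELEMENTS, theta=THETA, n_trials=N_TRIALS, seed=0):
    rng = random.Random(seed)
    conditioned = [False] * n_elements
    curve = []
    for _ in range(n_trials):
        # Response probability = proportion of elements already conditioned.
        curve.append(sum(conditioned) / n_elements)
        # Reinforced trial: every sampled element becomes conditioned at once.
        for i in range(n_elements):
            if rng.random() < theta:
                conditioned[i] = True
    return curve

if __name__ == "__main__":
    for trial, p in enumerate(simulate(), start=1):
        print(f"trial {trial:2d}: p(response) = {p:.2f}")
```

In expectation this follows the familiar stimulus-sampling curve p(n+1) = p(n) + θ·(1 − p(n)): each sampled element is learned in a single trial, yet because only a fraction θ of the elements is sampled per trial, response probability climbs gradually toward 1.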

  4. Guthrie & Reinforcement • The reinforcer is salient, so it changes the stimulus (environmental situation). • Reward keeps competing responses from being associated with the initial stimulus. • Competing responses are instead associated with the presence of the reward. • The stereotyped (fixed) flank-rubbing of cats in the puzzle box was taken as support for Guthrie, but it was later shown to be related to the presence of the experimenter instead.

  5. Tolman’s Operational Behaviorism • His theories relied on “intervening variables” not mechanistic S-R associations. • Behavior is motivated by goals. • Behavior is flexible, a means to an end. • Rats in mazes form cognitive maps of their environment. • Animals learn about stimuli, not just behavior.

  6. Evidence of Cognitive Maps • When the maze layout was changed, rats still ran toward the location of the original “goal.” • However, a light could have served as a cue in both layouts. • Using a “plus maze,” some rats were trained to always turn in a certain direction, while others were trained to reach a consistent place. • Going to the consistent place was easier to learn than making a consistent turn.

  7. Latent Learning • Rats were given experience in a complex maze, without reward. • Later they were rewarded for finding the goal box. • Performance (number of errors) improved greatly with reward, even among previously unrewarded rats. • Reward motivates performance, not learning.

  8. Skinner’s Contribution • Skinner was uninterested in theory – he wanted to see how learning works in practice. • Operant chambers permit behaviors to be repeated as often as desired – the behavior is emitted (voluntary) rather than elicited. • Superstitious behavior – animals were rewarded at intervals without regard to their behavior. • Animals related whatever they happened to be doing to the reward and wound up performing odd behaviors.

  9. Stimulus Control • Skinner discovered that stimuli (cues) provide information about the opportunity for reinforcement (reward). • The stimulus sets the occasion for the behavior. • Fading – gradually transferring stimulus control from a simple stimulus to a more complex one. • Operant behavior is controlled by both stimuli and reinforcers.
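
As a rough sketch of what fading might look like as a training schedule (entirely illustrative – the number of steps and the intensity values are assumptions, not taken from the chapter), the simple cue is attenuated in small steps while the more complex target stimulus stays present, so stimulus control transfers gradually:

```python
# Hypothetical fading schedule: transfer stimulus control from a salient
# "simple" cue to the "complex" target stimulus by attenuating the simple cue
# in small steps while the target remains present throughout.

def fading_schedule(n_steps=10):
    schedule = []
    for step in range(n_steps):
        simple_intensity = 1.0 - step / (n_steps - 1)  # simple cue fades out
        schedule.append({
            "trial_block": step + 1,
            "simple_cue": round(simple_intensity, 2),
            "complex_cue": 1.0,                        # target cue always on
        })
    return schedule

if __name__ == "__main__":
    for phase in fading_schedule():
        print(phase)
```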

  10. Discriminative Stimuli • Discriminative stimuli act like the “occasion setters” of classical conditioning (see Chap 5). • The stimulus that signals the opportunity for responding and gaining a reward is the SD. • The stimulus that signals the absence of that opportunity is the SΔ (S-delta).

  11. Types of Reinforcers • Primary reinforcers – stimuli or events that reinforce because of their intrinsic properties: • Food, water, sex • Secondary reinforcers – stimuli or events that reinforce because of their association with a primary reinforcer: • Money, praise, grades, sounds (clicks) • Also called conditioned reinforcers.

  12. Behavior Chains • Secondary (conditioned) reinforcers reward intermediate steps in a chain of behavior leading to a primary reinforcer. • Secondary reinforcers can also be discriminative stimuli that set the occasion for more responding. • Classical conditioning is a glue that enables chains of behavior leading to a goal.
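
One way to picture a chain is as a sequence of links in which each link’s outcome stimulus plays a dual role: conditioned reinforcer for the response just made and discriminative stimulus (SD) for the next response, with the primary reinforcer arriving only at the end. The sketch below is hypothetical – the particular stimuli and responses are invented for illustration, not taken from the chapter.

```python
from dataclasses import dataclass

@dataclass
class Link:
    sd: str        # discriminative stimulus that sets the occasion
    response: str  # response emitted in the presence of that SD
    outcome: str   # stimulus produced by completing the response

def run_chain(links):
    # Print the dual role of each link's outcome stimulus.
    for i, link in enumerate(links):
        print(f"SD '{link.sd}' -> response '{link.response}' -> produces '{link.outcome}'")
        if i == len(links) - 1:
            print(f"   '{link.outcome}' is the primary reinforcer (end of chain)")
        else:
            print(f"   '{link.outcome}' is a conditioned reinforcer here and the SD for the next link")

if __name__ == "__main__":
    # Hypothetical rat chain ending in food.
    run_chain([
        Link(sd="light on", response="climb platform", outcome="tone"),
        Link(sd="tone", response="pull chain", outcome="click"),
        Link(sd="click", response="press lever", outcome="food"),
    ])
```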
