Chapter 11 Inferences and the Representations of Knowledge in Operant Conditioning
Operant Conditioning is also an Inference Task • In Operant Conditioning, the individual’s task is to discover what aspects of his/her behavior will produce a certain outcome, and when and where this will occur • As in Pavlovian Conditioning, contingency and temporal proximity are neither necessary nor sufficient conditions for the formation of causal inferences
Consistency • This refers to how likely it is that the behavior is followed by the consequence (the behavior-outcome contingency) • Although operant conditioning and operant behaviors can be influenced by contingency, consistency is neither a necessary nor a sufficient condition for the occurrence of operant conditioning: behaviors can be learned even when the contingency is imperfect, and the presence of a contingency does not automatically produce conditioning • e.g., the “Miserly Raccoon”
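The slide treats contingency qualitatively. A common formalization (an illustrative addition, not from the chapter) expresses it as the difference between the probability of the outcome given the behavior and the probability of the outcome in the behavior's absence, often written ΔP:

```python
# Illustrative sketch: the behavior-outcome contingency quantified as
# deltaP = P(outcome | behavior) - P(outcome | no behavior).
# deltaP = 0 means the outcome does not depend on the behavior at all.

def delta_p(p_outcome_given_behavior: float,
            p_outcome_given_no_behavior: float) -> float:
    """Contingency between a behavior and its consequence."""
    return p_outcome_given_behavior - p_outcome_given_no_behavior

# Perfect contingency: the outcome always follows the behavior and
# never occurs without it.
print(delta_p(1.0, 0.0))   # 1.0

# Zero contingency: the outcome is just as likely without the behavior,
# so there is nothing for the behavior to "cause."
print(delta_p(0.5, 0.5))   # 0.0
```

The chapter's point maps onto this measure: conditioning can occur even when ΔP is well below 1, and a positive ΔP by itself does not guarantee conditioning.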
Temporal Proximity • This occurs in operant conditioning when the consequence occurs soon after the target behavior • Generally, operant conditioning occurs more rapidly when the reinforcing event follows the target behavior with little delay • However, behaviors can still be learned even if the reinforcers are delayed • e.g., doing something nice for someone and not being thanked until the next day still provides reinforcement for performing that nice gesture again in the future
Temporal Precedence • This concept is inherent in the operant conditioning procedure because the behavior always occurs before the consequence • Therefore, temporal precedence IS a necessary condition for the occurrence of operant conditioning
Reconciling the Data • Although temporal proximity and contingency are neither necessary nor sufficient conditions for the occurrence of operant conditioning, they are still related: • As temporal proximity declines (i.e., as the time b/w the behavior and receipt of the consequence increases), operant behaviors take longer to learn, and judgments of causality become less certain • Likewise, reductions in the behavior-outcome contingency toward zero (i.e., the consequence depending less and less on the behavior) decrease rates of behavior and the certainty of causality judgments
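The two relationships above can be illustrated with a toy trial-by-trial learner (a sketch for intuition only, not a model from the chapter): response strength climbs toward a ceiling on reinforced trials, degrading the contingency makes reinforced trials rarer, and longer delays are modeled as a smaller effective learning rate.

```python
import random

def trials_to_criterion(p_reinforce: float, delay_discount: float = 1.0,
                        alpha: float = 0.2, criterion: float = 0.9,
                        seed: int = 0, max_trials: int = 10_000) -> int:
    """Toy linear-operator learner. On each trial the behavior occurs;
    with probability p_reinforce the consequence follows it, and response
    strength v moves toward 1 at rate alpha * delay_discount (longer
    reinforcement delays -> smaller effective learning rate). Returns the
    number of trials needed for v to reach the acquisition criterion."""
    rng = random.Random(seed)   # fixed seed for a reproducible sketch
    v = 0.0
    for trial in range(1, max_trials + 1):
        if rng.random() < p_reinforce:          # consequence follows behavior
            v += alpha * delay_discount * (1.0 - v)
        if v >= criterion:
            return trial
    return max_trials

# Immediate, perfectly contingent reinforcement: fast acquisition.
fast = trials_to_criterion(p_reinforce=1.0, delay_discount=1.0)

# Degraded contingency plus a delayed reinforcer: much slower acquisition.
slow = trials_to_criterion(p_reinforce=0.3, delay_discount=0.5)

print(fast, slow)
assert fast < slow
```

The qualitative pattern, not the particular numbers, is the point: weakening either temporal proximity or contingency slows learning, matching the slide's summary.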
The Representations of Knowledge in Operant Conditioning • Individuals encode experience with an operant conditioning procedure as Declarative Knowledge about representations of the situation, their behavior, and the reinforcing event • Under certain conditions, representations of the situation and behavior can also be reorganized as Procedural Knowledge
Stimulus-Response (S-R) Association • Thorndike believed that operant conditioning is represented in memory as Procedural Knowledge in terms of S-R Associations • The individual learns to perform a particular activity in a given situation, and the reinforcing event (the event that causes satisfaction) serves to strengthen the link between the stimulus situation and the behavior that occurs just prior to the presentation of the reinforcer • The knowledge obtained is classified as procedural knowledge: in the situation, perform the action
Behavior-Outcome (B-O) Association • Tolman and Konorski, on the other hand, viewed operant conditioning procedures as being encoded as Declarative Knowledge in terms of B-O Associations • The representation of the stimulus situation serves as the occasion setter informing the individual that the outcome will follow a certain behavior in that situation • It tells you: In this situation, performing the action will produce a certain outcome
Difference b/w S-R and B-O Association • In S-R Associations, the reinforcing event is NOT encoded as part of what gets learned; there is no mention of the consequences • In B-O Associations, you are told what action to perform as well as the outcomes that performing that action will produce
Learning the Incentive Value of Reinforcers • Individuals come to most operant conditioning situations with knowledge of the value of the reinforcing event for them, but how do they obtain this knowledge? • They must learn it • According to Tolman, individuals obtain knowledge about the incentive values of foods, fluids, and other things by consuming them and experiencing the aftereffects of these things • If the aftereffects of consumption are experienced as positive, then the consumed object gains positive value • If the effects are experienced as negative then the consumed object gains negative value
Incentive Learning Example • Food develops positive incentive value when individuals are deprived of food (and presumably hungry) because food reduces hunger, and water develops positive incentive value when individuals are deprived of water (and presumably thirsty)
Establishing Operations • After individuals learn what things satisfy their hunger and thirst, the current incentive value of these things depends on the individual’s current motivational state • Deprivation and Satiation are Establishing Operations for the reinforcing values of things; that is, they produce changes in the internal or external environment that alter the effectiveness of things to serve as reinforcers
The Role of the Stimulus in Operant Conditioning • Discriminative Stimulus = an event that signals that a specific behavior will produce a certain outcome • It can be anything the individual is capable of detecting: sound, color, shape, object, facial expression, and so on • Discriminative Stimuli serve 2 functions: • 1) as Occasion Setters • 2) as Pavlovian Predictors of Significant Events
Discriminative Stimuli as Occasion Setters • Discriminative Stimuli inform the individual that a specific outcome may follow if he or she performs a certain action • In other words, the discriminative stimulus “sets the occasion” for the behavior-outcome relationship
Discriminative Stimuli Can Elicit Representations of Reinforcers • Because reinforcing events must occur in some context or situation, or in the presence of some stimulus, these stimuli can come to elicit representations of the reinforcing events • When a discriminative stimulus (e.g., a light) is trained with a reinforcer in one phase and an operant behavior (e.g., lever pressing) is trained with the same reinforcer in another phase, presenting the stimulus raises the rate of the behavior above baseline: the stimulus and the behavior are connected through their shared reinforcer (a Stimulus-Reinforcer Association)
Operant Behavior in Action • How does the individual translate the knowledge he or she obtains into action? • After experience with a discriminative stimulus-behavior-outcome relationship, subsequent presentation of the discriminative stimulus elicits a representation of the reinforcing event • Using this representation, the individual recalls the behaviors that produced this outcome (through the behavior-outcome association) and then performs the previously successful behavior • In other words, the discriminative stimulus informs the individual that this is a place where something of value was found in the past, and that performing some action produced it • The observed behaviors reflect the individual’s expectations about the reinforcing event in the situation
Operant Behavior as Procedural Knowledge • The effects of experience with an operant conditioning procedure can be reorganized as procedural knowledge when the individual experiences the procedure for an extended period of time while there is little correlation between variations in behavior and outcomes (i.e., when wide variations in performance produce the same outcome) • Once this reorganization has occurred, devaluing the reinforcer has little effect on the performance of the target behavior
What does the transition from declarative to procedural knowledge depend on? • It appears to depend on how many behaviors are reinforced, NOT on the amount of behavior that occurs in the situation