Understanding Classical Conditioning: From Sensory Experience to Cognitive Decision

Learn about classical conditioning, habituation, and operant conditioning, and how these processes connect sensory experience to cognitive decisions. Explore the cognitive and neural elements of classical conditioning and the evolution of adaptive behaviors. Discover the principles of operant conditioning and the influence of consequences on behavior.


Presentation Transcript


  1. Learning • All learning begins with association: sensory experience connects to emotions, which lead to cognitive decisions (sometimes conscious, sometimes not). • Habituation: a general process in which repeated or prolonged exposure to a stimulus results in a gradual reduction in responding. • Kandel, 2006: Aplysia exhibits another form of learning, sensitization, which occurs when presentation of a noxious stimulus leads to an increased response to a later stimulus. • This leads us to Pavlov and classical conditioning. • US: unconditioned stimulus, something that produces a reliable, naturally occurring reaction in an organism. • UR: unconditioned response, a reflexive reaction that is reliably produced by an unconditioned stimulus. Puppies already drool at the sight of food. • Memorize 'The Elements of Classical Conditioning' for the next exam.

  2. Classical Conditioning • CS: conditioned stimulus, a previously neutral stimulus that produces a reliable response in an organism after being paired with a US. • CR: conditioned response, a reaction that resembles an unconditioned response but is produced by a CS. • Acquisition: the phase of classical conditioning when the CS and the US are presented together. After learning is established, the CS by itself will reliably elicit the CR. • Memorize the figure 'Acquisition, Extinction, and Spontaneous Recovery' for the next exam. • Second-order conditioning: conditioning in which a stimulus is paired with an established CS rather than with the US itself. Although money is not directly associated with the thrill of a new sports car, it is directly associated with the CS that results in gratifying outcomes. Such repeated exposure means that eventually money becomes desirable for its own sake. • Extinction: the gradual elimination of a learned response that occurs when the CS is repeatedly presented without the US. Based on LTP.

  3. Classical Conditioning • Generalization: the CR is elicited even though the stimulus is slightly different from the CS used during acquisition. • Discrimination: the capacity to distinguish between similar but distinct stimuli. • Siegel's work with drug overdoses: why do experienced drug users die from overdoses in novel environments? How does Pavlovian conditioning apply to this situation? How does Web Article Three apply? Hint: many CRs are compensatory reactions to the US.

  4. Cognitive & Neural Elements of Classical Conditioning • Pavlov's dogs were sensitive to the fact that he was not a reliable indicator of the arrival of food. • Rescorla and Wagner (1972) were the first to theorize that classical conditioning occurs when the animal has learned to set up an expectation. This in turn leads to an array of behaviours associated with the presence of the CS. • Their model predicted that conditioning would be easier when the CS was an unfamiliar event (a sketch of this error-correction idea follows this slide). Classical conditioning incorporates a significant cognitive element. • Studies of classical conditioning in humans indicate that conditioning can occur without conscious awareness of the relationship between the CS and the US. • Thompson (2005) demonstrated that the cerebellum is critical for the occurrence of eyeblink (classical) conditioning. • The central nucleus of the amygdala is critical for fear conditioning, such as defensive freezing (LeDoux et al., 1988).
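The Rescorla-Wagner idea of 'learning as expectation' is usually written as an error-correction rule: the change in associative strength on each trial is proportional to the difference between the outcome that occurred and the outcome the animal predicted. Below is a minimal Python sketch of that rule; the function name, the parameter values, and the collapsing of the alpha and beta salience terms into one learning rate are my own illustrative choices, not taken from the slides or the 1972 paper.

```python
# Minimal sketch of the Rescorla-Wagner error-correction rule.
# learning_rate stands in for the alpha*beta salience terms;
# lambda_us is the maximum associative strength the US supports.

def rescorla_wagner(n_trials, us_present, v_start=0.0,
                    learning_rate=0.3, lambda_us=1.0):
    """Return associative strength V after each trial.

    Each trial: delta_V = learning_rate * (outcome - V), where
    outcome is lambda_us if the US occurs and 0 otherwise.
    """
    v = v_start
    history = []
    for trial in range(n_trials):
        outcome = lambda_us if us_present(trial) else 0.0
        v += learning_rate * (outcome - v)   # prediction-error update
        history.append(round(v, 3))
    return history

# Acquisition: CS always followed by the US, so V climbs toward lambda_us.
acquisition = rescorla_wagner(10, us_present=lambda t: True)

# Extinction: CS now presented alone, so V decays back toward zero.
extinction = rescorla_wagner(10, us_present=lambda t: False,
                             v_start=acquisition[-1])

print(acquisition)
print(extinction)
```

Because each update depends on the prediction error (outcome minus current expectation), a CS whose outcome is already fully predicted supports almost no further learning, which is one way to read the claim that conditioning is easier with an unfamiliar CS; the extinction call also reproduces the declining curve in the 'Acquisition, Extinction, and Spontaneous Recovery' figure.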

  5. Evolution & Classical Conditioning • Behaviours that are adaptive allow an organism to survive and thrive in its environment. • Garcia & Koelling, 1966 paired a variety of CSs with a US that caused nausea and vomiting in rats hours later. They found weak or no conditioning when the CS was visual, auditory, or tactile, but a strong aversion when the CS was a taste. • There is an error in the text: http://www.ratbehavior.org/vomit.htm • Evolution has provided species with biological preparedness, a propensity for learning particular kinds of associations but not others. For example, birds depend primarily on visual cues for finding food and are relatively insensitive to taste and smell. • It is relatively easy to produce a food aversion in birds using an unfamiliar stimulus as the CS, such as brightly coloured food. • This difference (smell for mammals, vision for birds) has its roots in the Triassic period, about 220 million years ago.

  6. Operant Conditioning • Operant conditioning: a type of learning in which the consequences of an organism's behaviour determine the likelihood of that behaviour being repeated in the future. • Thorndike's puzzle box is an example of such instrumental behaviours. • At first, the cat enacts any number of likely but unsuccessful behaviours, but only one leads to food and freedom. Over time, the unsuccessful behaviours become less frequent, and the one instrumental behaviour becomes more so. • Thorndike's work resonated with most behaviourists at the time: it was still observable, quantifiable, and free from explanations involving the mind. • B.F. Skinner redefined this as operant behaviour: behaviour that an organism produces that has some impact on the environment. Most organisms actively engage the environment to reap rewards.

  7. Reward OR Punishment • Reinforcer: any stimulus or event that functions to increase the likelihood of the behaviour that led to it. • Punishment: any stimulus or event that functions to decrease the likelihood of the behaviour that led to it. • Memorize the table 'Reinforcement and Punishment' for the next exam. • Reinforcement is generally more effective than punishment in promoting learning. Why? • Punishment signals that an unacceptable behaviour has occurred, but does not specify what should be done instead. • For study purposes, think R vs. P; they are essentially two separate systems of operant conditioning.

  8. Reinforcers & Punishers • Primary reinforcers satisfy biological needs. • Secondary reinforcers derive their effectiveness from associations with primary reinforcers. E.g.: bitcoin is a neutral stimulus until paired with food or shelter. • A key determinant of the effectiveness of a reinforcer is the amount of time between the occurrence of the behaviour and the reinforcer. The more time that elapses, the less effective the reinforcer becomes. • Memorize the figure 'Delay of Reinforcement' for the next exam. • The greater potency of immediate versus delayed reinforcement makes it difficult to quit smoking or to lose weight. With respect to smoking, what does this mean about the importance of Web Article Three? • Consider the opposite: the longer the delay between a behaviour and the administration of a punishment, the less effective it is in suppressing the target behaviour. Finally, punishment can be turned into reward: Vory v Zakone.

  9. Reinforcement Schedules • Memorize the figure 'Reinforcement Schedules' for the next exam. • Fixed Interval: Christmas. • Variable Interval: Start Up / Semester (best indicator of life success) • Fixed Ratio: Piecework • Variable Ratio: Gambling (a small simulation of the two ratio schedules follows this slide) • Variable interval schedules typically produce steady, consistent responding because the time until the next reinforcement is less predictable. Although a semester system seems like a fixed interval, it is not, as the problems encountered during a semester have less predictable reinforcements. So, I don't agree with the text here. This schedule is what produces billionaires. • Intermittent reinforcement: only some of the responses made are followed by reinforcement. The more irregular and intermittent a schedule is, the more difficult it becomes for an organism to detect when reinforcement has actually stopped and the response is undergoing extinction.
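To make the ratio schedules concrete, here is a small Python sketch (my own illustration, not from the text) that decides whether each response earns reinforcement under a fixed-ratio versus a variable-ratio rule; interval schedules would be written the same way but keyed to elapsed time rather than response counts.

```python
import random

# Illustrative schedule sketches: each returned function reports True
# when a response is reinforced. The parameter values are arbitrary.

def fixed_ratio(n):
    """Reinforce every n-th response (e.g. piecework)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return respond

def variable_ratio(mean_n):
    """Reinforce after an unpredictable number of responses
    averaging mean_n (e.g. gambling)."""
    count, target = 0, random.randint(1, 2 * mean_n - 1)
    def respond():
        nonlocal count, target
        count += 1
        if count >= target:
            count, target = 0, random.randint(1, 2 * mean_n - 1)
            return True
        return False
    return respond

fr5, vr5 = fixed_ratio(5), variable_ratio(5)
print([fr5() for _ in range(15)])  # True exactly on responses 5, 10, 15
print([vr5() for _ in range(15)])  # True at unpredictable points, roughly every 5th
```

Because the variable-ratio target changes after every reinforcement, a run of unreinforced responses never reliably signals that reinforcement has stopped, which is why such schedules are so resistant to extinction.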

  10. Shaping by Successive Approximations • Shaping: the reinforcement of successive steps to a final desired behaviour. The outcome of one set of behaviours shapes the next set of behaviours. • A small reward is given for each behaviour that approximates the final goal. Note that there is no element of punishment. • Any behaviour that is accidentally but successfully reinforced will be repeated, and this can result in idiosyncratic, superstitious behaviours. • Accidental relationships can therefore appear to have a cause-and-effect chain. • Bloom et al., 2007 reported that reinforcing human adults or children using schedules in which reinforcement is not contingent on their responses can produce seemingly superstitious behaviour. • Most human superstitions can be attributed to this kind of scheduling. In the philosophy of science, it is called a 'post hoc' fallacy (from the Latin 'post hoc, ergo propter hoc', or 'after this, therefore because of this').

  11. Cognitive Elements of Operant Conditioning • Tolman proposed that animals establish a means-end relationship. Conditioning experience produces knowledge: a specific reward (end state) will appear if a specific response (the means) is made. • The stimulus does not directly evoke a response; rather, it establishes an internal cognitive state which then produces the behaviour. • Latent learning: something is learned, but is not manifested in a behaviour change until some time in the future. • Cognitive map: a mental representation of the physical features of the environment. • Memorize the figure 'Latent Learning' for the next exam. • The figure 'Cognitive Maps' indicates that the rats had formed a cognitive map of their environment and knew where they needed to end up spatially, compared to where they began. Strict behaviourism theorized that the rats would have backtracked and tried the next entrance on the other side of their first attempt.

  12. Neural Elements of Operant Conditioning • Memorize the figure 'Pleasure Centres in the Brain' for the next exam. • Olds and Milner (1956) discovered that when electrodes were placed in a rat's brain and the rat was allowed to stimulate itself, it would do so continuously, ignoring food and water. The likely cause was the dopamine pathway to the nucleus accumbens. • In recent years, several competing hypotheses about the precise role of dopamine have emerged: (1) it is more closely linked with the expectation of reward (rather than reward itself); (2) it is more closely associated with wanting or craving something than simply liking it. (A small numerical illustration of hypothesis 1 follows this slide.) • Drugs such as cocaine, amphetamines and opiates activate the dopamine pathway, and dopamine-blocking drugs dramatically diminish their reinforcing effects. • fMRI studies show activity in the nucleus accumbens in heterosexual men looking at pictures of attractive women, and in individuals who believe they are about to receive money. • These biological structures evolved to ensure that a species engages in activities that aid survival and reproduction.
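Hypothesis (1) is often formalised, though the slides do not use the term, as a reward prediction error: dopamine activity tracks the difference between the reward received and the reward expected. A tiny illustrative Python sketch with made-up values:

```python
# Illustrative only: dopamine signal modelled as a reward prediction
# error, i.e. reward received minus reward expected. Values are made up.

def prediction_error(expected, received):
    return received - expected

print(prediction_error(expected=0.0, received=1.0))  # surprise reward -> +1.0 (strong burst)
print(prediction_error(expected=1.0, received=1.0))  # expected reward ->  0.0 (little change)
print(prediction_error(expected=1.0, received=0.0))  # omitted reward  -> -1.0 (dip)
```

On this reading, a fully expected reward produces little dopamine response, which is consistent with the claim that the signal tracks expectation rather than the reward itself.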

  13. Evolutionary Elements of Operant Conditioning • A complex T-maze simulates the natural environment. Like many other foraging species, rats placed in a complex T-maze show evidence of their evolutionary preparedness. These rats will systematically travel from arm to arm in search of food, never returning to the arms they have already visited. • Breland & Breland, 1961 reported that pigs are biologically predisposed to root out their food, just as raccoons are predisposed to wash their food. Trying to train either species to pick up a coin and drop it in a box had ironic consequences: the animals reverted to rooting or washing the coins instead. • To answer the Triassic question in this evolutionary manner: since birds have air sacs plus lungs, they are able to process oxygen more effectively than mammals, who breathe in and out using a diaphragm. The Triassic era was very hot, with lower oxygen levels. This forced the mammals underground into the dark, where smell dominated over vision, and gave the earliest dinosaurs, which could move about much more quickly, the daylight world. Bird eyes evolved without blood vessels in the retina, giving them 4 to 5 times better vision than mammals; many mammals still do not have colour vision as a result of their evolutionary path.

  14. Observational Learning in Humans • Observational learning: a process in which learning takes place by watching the actions of others. Even complex motor skills, such as surgery, are learned in part through extensive observation and imitation of models. • Figure 'Beating Up Bobo' (Bandura, 1961, 1963, 1977): the adult model purposely used novel behaviours so that the researchers could distinguish aggressive acts that were clearly the result of observational learning. • When they saw adult models being punished for behaving aggressively, the children showed considerably less aggression. • When they saw adult models being rewarded and praised (secondary reinforcement) for aggressive behaviour, they displayed an increase in aggression. • Diffusion chain: a process in which individuals initially learn a behaviour by observing another individual perform that behaviour, and then serve as a model from which other individuals learn the behaviour. • Studies have shown that observational learning sometimes results in just as much learning as practising the task itself (Heyes & Foster, 2002).

  15. Neural Elements in Observational Learning • Mirror neuron system: neurons that fire when a primate performs an action, and also fire when the primate watches another member of its own or a similar species perform the same action. (Monkey see, monkey do.) • Mirror neurons are thought to be represented in specific sub-regions of the frontal and parietal lobes. • If the appropriate neurons fire when another organism is performing an action, it could indicate awareness of intentionality, or that the animal is anticipating a likely course of future actions. • Observational learning also relies on the motor cortex. To examine whether observational learning depends on this area, TMS was applied just after participants observed performance of a reaching movement, causing a temporary disruption in that brain region. • Applying TMS to the motor cortex greatly reduced the amount of observational learning, whereas applying TMS to a control area outside the motor cortex had no effect on observational learning.

  16. Implicit Learning: Cognitive & Neural • Implicit learning takes place largely independent of awareness of both the process and the products of information acquisition. We are so attuned to linguistic, social, emotional, or sensorimotor events in our environment that internal representations are gradually built up without explicit awareness. Explicit learning becomes implicit over time. • Reber, 1967: artificial grammar experiments (a sketch of such a grammar follows this slide). Participants gradually developed a vague, intuitive sense of the 'correctness' of particular letter groupings. They became quite good at this task, but were unable to demonstrate much in the way of explicit awareness of the rules and regularities they were using. • Implicit learning differs very little from person to person; it is unrelated to I.Q.; infants are just as good at it as university students; it changes little across the lifespan. • Implicit learning is remarkably resistant to various disorders that affect explicit learning. • Remember this section of the text for your next long essay: amnesic patients show normal implicit memories, and display normal implicit learning (Knowlton, 1992).
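To make the artificial grammar task concrete: 'grammatical' strings are generated from a small finite-state grammar, participants study them, and later they classify new strings without being able to state the rules. The Python sketch below uses a hypothetical grammar of my own invention (Reber's actual 1967 transition diagram differed), just to show the kind of letter strings involved.

```python
import random

# Hypothetical finite-state grammar (NOT Reber's actual 1967 grammar):
# each state maps to (letter, next_state) options; next_state None = stop.
GRAMMAR = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S2")],
    "S2": [("V", "S3"), ("T", "S1")],
    "S3": [("V", None), ("S", None)],
}

def generate_string(max_len=8):
    """Walk the grammar from S0, emitting letters until a stop or max_len."""
    state, letters = "S0", []
    while state is not None and len(letters) < max_len:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

# Training strings like these are shown to participants; test strings that
# violate the transitions feel subtly 'wrong' even though the rules are
# never stated, which is the hallmark of implicit learning.
print([generate_string() for _ in range(5)])
```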

  17. Implicit & Explicit Learning: Neural Pathways • Memorize 'Implicit & Explicit Learning Activate Different Brain Areas' for the next exam. The hippocampus and nearby structures in the medial temporal lobe do not seem to be necessary for implicit learning. • Reber et al., 2002 ('Array of Stars' dot-pattern task): participants who were given explicit instructions as to how to determine the underlying prototypical dot pattern showed increased activity in the prefrontal cortex, parietal cortex, and hippocampus, all associated with the processing of explicit memories. Those given the implicit instructions showed decreased activity in the occipital region (which is involved in visual processing). Distinct brain structures were recruited in different ways depending on whether the instructions were explicit or implicit. • Forkstam et al., 2006: Broca's area is activated during artificial grammar learning. • Activating Broca's area by applying electrical stimulation to the nearby scalp area enhances implicit learning.
