Psyc 317: Cognitive Psychology Lecture 8: Knowledge
Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain
Categorization is hierarchical • So we have levels of categories • How can all of this be represented in the mind? • Semantic network approach
Collins & Quillian’s Model • Nodes are bits of information • Links connect them together [Figures: a semantic network template and a simple semantic network]
Get more complicated! • Add properties to nodes:
How does the network work? • Example: Retrieve properties of canary
Why not store it all at the node? • To get “can fly” and “has feathers,” you must travel up to bird • Why not put it all at canary? • Cognitive economy: repeating shared properties at every node is inefficient – store them once, at the highest node they apply to • Exceptions like “cannot fly” are stored at the exception node (e.g., ostrich)
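To make cognitive economy concrete, here is a minimal Python sketch (the data structure and property names are illustrative, not Collins & Quillian’s actual implementation): shared properties sit once at the highest node they apply to, exceptions sit at the specific node, and verification walks up the “isa” links, counting traversals.

```python
network = {
    "animal":  {"isa": None,     "props": {"has skin": True, "can move": True}},
    "bird":    {"isa": "animal", "props": {"can fly": True, "has feathers": True}},
    "canary":  {"isa": "bird",   "props": {"can sing": True, "is yellow": True}},
    "ostrich": {"isa": "bird",   "props": {"can fly": False, "is tall": True}},
}

def verify(concept, prop):
    """Walk up the isa links; return (truth value, links traversed).
    An exception stored locally (ostrich: "can fly" = False) is found
    before the inherited default at bird."""
    links = 0
    while concept is not None:
        node = network[concept]
        if prop in node["props"]:
            return node["props"][prop], links
        concept = node["isa"]
        links += 1
    return None, links

print(verify("canary", "can sing"))   # (True, 0): stored at the node itself
print(verify("canary", "has skin"))   # (True, 2): two links up -> slower RT
print(verify("ostrich", "can fly"))   # (False, 0): found at the exception node
```

The traversal count is the model’s prediction: more links between the concept and the property means a longer verification time.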
How do we know this works? Collins & Quillian (1969) • Ask participants to verify canary properties that require more vs. fewer links of traversal • More links to traverse should mean longer reaction times
Link Traversal Demo Yes or no: • A German Shepherd is a type of dog. • A German Shepherd is a mammal. • A German Shepherd barks. • A German Shepherd has skin.
Spreading activation: Priming the Network • An activated node spreads its activation along its links to connected nodes
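A minimal sketch of spreading activation, assuming a made-up word-association graph and decay factor: activating one node passes a fraction of its activation along each link, so near neighbors end up primed.

```python
links = {
    "bread":  ["butter", "bakery"],
    "butter": ["bread", "milk"],
    "bakery": ["bread"],
    "milk":   ["butter", "cow"],
    "cow":    ["milk"],
}

def spread(start, decay=0.5, steps=2):
    """Each step, every active node passes decay * its activation
    along each of its links; closer nodes end up more activated."""
    activation = {start: 1.0}
    frontier = {start}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbor in links[node]:
                boost = activation[node] * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier.add(neighbor)
        frontier = next_frontier
    return activation

# "bread" activates "butter" strongly (one link) and "milk" weakly
# (two links) -- nearby concepts are primed and responded to faster.
print(spread("bread"))
```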
Spreading Activation Works Meyer & Schvaneveldt (1971) • Lexical decision task: Are the two letter strings both words? • Pairs of real words were either associated (e.g., bread–butter) or unassociated
Meyer & Schvaneveldt Results • Associated words prime each other: “yes” responses were significantly faster for associated pairs
Collins & Quillian Criticisms • Typicality effect is not explained – ostrich and canary are both one link away from bird, yet canary is verified faster • Incongruent results (Rips et al., 1973): – “A pig is a mammal” 1476 ms – “A pig is an animal” 1268 ms – animal is farther up the hierarchy than mammal, yet verified faster
Collins & Loftus’ Model • No more hierarchy • Shorter links between more connected concepts
(Dis)advantages of the model “A fairly complicated theory with enough generality to apply to results from many different experimental paradigms.” • This is bad. Why?
The model is unfalsifiable • The theory explains everything – How long should links be between nodes? [Figure: Result A and Result B each imply a different arrangement of nodes]
Everything is arbitrary • Cannot disprove theory: what does link length mean for the brain? • You can make connections as long as you want/need to explain your results
Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain
Connectionism is a new version of semantic network theories • McClelland & Rumelhart (1986) • Concepts are represented in networks with nodes and links – But they function quite differently than in semantic networks • Theory is biologically based • A quick review of neurons…
Physiological Basis of Connectionism • Neural circuits: Processing happens across many neurons connected by synapses • Excitatory and inhibitory connections: inputs can push a neuron’s firing rate up or down
Physiological Basis of Connectionism • Strength of firing: The sum of excitatory (+) and inhibitory (–) inputs onto a neuron determines its rate of firing [Figure: a unit receiving weighted positive and negative inputs and firing at their summed rate]
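A one-function sketch of this idea (the numbers are illustrative): the unit’s firing rate is the weighted sum of its excitatory and inhibitory inputs, floored at zero.

```python
def firing_rate(inputs, weights):
    """Sum each input scaled by its connection weight; excitatory
    weights are positive, inhibitory weights negative."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return max(total, 0.0)  # a firing rate cannot go below zero

# Three inputs firing at rate 1.0, through two excitatory connections
# and one inhibitory connection:
print(firing_rate([1.0, 1.0, 1.0], [1.5, 0.2, -0.75]))  # 0.95
```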
Basics of Connectionism • Instead of nodes, you have units – Units are “neuronlike processing units” • Units are connected together • Parallel Distributed Processing (PDP) – Activation occurs in parallel – Processing occurs in many units
Basic PDP network [Figure 5.6: input units take stimuli from the environment, weighted connections do the processing, and the pattern across output units is the mental representation]
How a PDP network works • Give the network stimuli via the input units • Information is passed through the network by hidden units – Weights affect activation of nodes • Eventually, the stimulus is represented as a pattern via the output units
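A minimal sketch of one forward pass, with made-up weights and a standard sigmoid squashing function (an assumption here; the slide does not specify the activation function):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_in_hidden, w_hidden_out):
    """Activation flows input -> hidden -> output; each unit squashes
    the weighted sum of its inputs."""
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, ws)))
              for ws in w_in_hidden]
    output = [sigmoid(sum(h * w for h, w in zip(hidden, ws)))
              for ws in w_hidden_out]
    return output

# 2 input units -> 2 hidden units -> 1 output unit (invented weights)
w_in_hidden  = [[0.4, -0.6], [0.7, 0.1]]  # one weight list per hidden unit
w_hidden_out = [[0.5, -0.3]]              # one weight list per output unit
print(forward([1.0, 0.0], w_in_hidden, w_hidden_out))
# The stimulus is now represented as a pattern of output activation.
```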
Example output • Different stimuli from the environment are represented by different patterns of activation across the same units
PDP Learning: Stage 1 • Give it input, get output
Learning: Error signals • The output pattern is not the correct pattern • Figure out what the difference is – That difference is the error signal • Use the error signal to fine-tune weights • Error signal is sent back using back propagation
Learning: Stage 2 • Back propagate error signal through network, adjust weights
Learning: Stage 3, 4, 5… 1024 • Now that weights are adjusted, give network same input • Lather, rinse, repeat until error signal is 0
So this is learning? • Repeated input and back propagation changes weights between units • When error signal = 0, the network has learned the correct weights for that stimulus – The network has been trained
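A toy version of the train-until-error-is-zero loop. For brevity this sketch uses a single layer and the delta rule rather than full back propagation through hidden units; the pattern, target, and learning rate are invented:

```python
def respond(pattern, weights):
    """One output per weight list: the weighted sum of the input."""
    return [sum(p * w for p, w in zip(pattern, ws)) for ws in weights]

def train(pattern, target, weights, rate=0.5, epochs=1000):
    """Present the input, compute the error signal, nudge the weights,
    repeat until the error signal is (near) zero."""
    for epoch in range(epochs):
        errors = [t - o for t, o in zip(target, respond(pattern, weights))]
        if all(abs(e) < 1e-6 for e in errors):
            return epoch                      # trained: error signal is ~0
        for ws, e in zip(weights, errors):    # adjust weights by the error
            for i, p in enumerate(pattern):
                ws[i] += rate * e * p
    return epochs

weights = [[0.0, 0.0], [0.0, 0.0]]             # 2 inputs -> 2 outputs
print(train([1.0, 0.0], [1.0, 0.0], weights))  # epochs until trained
print(weights)                                 # the learned weights
```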
So where is the knowledge? • Semantic networks – One node has “canary” and is connected to “can fly” and “yellow” • PDP networks – A bunch of nodes together represent “canary” and another bunch represent “yellow” – Distributed knowledge in neural circuits
PDP: The Good – Networks based on neurons • All nodes can do is fire (they are dumb) • Knowledge is distributed amongst many nodes • Sounds a lot like neurons and the brain! • Emergence: Lots of little dumb things form one big smart thing
PDP: The Good – Networks are damage-resistant • “Lesion” the network by taking out nodes • This damage does not totally take out the system – Graceful degradation • These networks can adapt to damage
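A minimal sketch of graceful degradation, assuming a made-up 100-unit distributed representation: knocking out units weakens the output gradually rather than destroying it, because no single unit carries the whole representation.

```python
import random

random.seed(0)
N = 100
weights = [random.uniform(0.5, 1.5) for _ in range(N)]  # 100 units

def output(lesioned):
    """Average contribution of the units that survived the lesion."""
    return sum(w for i, w in enumerate(weights) if i not in lesioned) / N

for fraction in (0.0, 0.1, 0.3, 0.5):
    lesioned = set(random.sample(range(N), int(N * fraction)))
    print(f"{int(fraction * 100):>2}% lesioned -> output {output(lesioned):.2f}")
# Performance falls off smoothly with damage: graceful degradation.
```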
PDP: The Good – Learning can be generalized • Related concepts should activate many of the same nodes – Robin and sparrow should share a lot of the same representation • PDP networks can emulate this – similar inputs produce similar patterns of activation
PDP: The Good – Successful computer models • Not just a theory: PDP networks can be programmed on a computer • Computational modeling of the mind – Object perception – Recognizing words
PDP: The Bad – Cannot explain everything • More complex tasks cannot be explained – Problem solving – Language processing • Limitation of computers? – The brain has billions of neurons and trillions of synapses – PDP networks can’t support that many nodes (yet)
PDP: The Bad – Retroactive interference • Learning something new interferes with something already learned • Example: Train network on “collie” – Weights are perfectly adjusted for collie • Give network “terrier” – Network must change weights again for terrier • Weights must change to accommodate both dogs
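The same toy delta-rule setup can show the interference (the “collie” and “terrier” input patterns below are invented): after training shifts the shared weights toward terrier, the network’s response to collie is no longer correct.

```python
def respond(pattern, weights):
    return [sum(p * w for p, w in zip(pattern, ws)) for ws in weights]

def train(pattern, target, weights, rate=0.5, epochs=200):
    for _ in range(epochs):
        errors = [t - o for t, o in zip(target, respond(pattern, weights))]
        for ws, e in zip(weights, errors):
            for i, p in enumerate(pattern):
                ws[i] += rate * e * p

weights = [[0.0, 0.0], [0.0, 0.0]]
collie  = [1.0, 0.5]                  # overlapping input patterns
terrier = [0.9, 0.6]

train(collie, [1.0, 0.0], weights)
print(respond(collie, weights))       # ~[1.0, 0.0]: tuned for collie
train(terrier, [0.0, 1.0], weights)   # now learn terrier...
print(respond(collie, weights))       # ...and collie's output has drifted
```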
PDP: The Bad – Cannot explain rapid learning • It does not take thousands of trials to remember that you parked in Lot K – How does rapid learning occur? • Two separate systems?
How the connectionists explain rapid learning • Two separate systems – Slow, PDP-style learning in the cortex – Rapid learning in the hippocampus
Outline • Approaches to Categorization – Definitions – Prototypes – Exemplars • Is there a special level of category? • Semantic Networks • Connectionism • Categories in the brain
Categories in the brain • Imaging studies have localized face and house areas – Still not very exciting (“light-up” studies) • Does this mean one brain area processes houses, another faces, another chairs, technology, etc.?
Visual agnosia for categories • Damage to inferior temporal cortex causes inability to name certain objects – Visual agnosia • Double dissociation for living/nonliving things
Double Dissociation • Some patients cannot name living things but can name non-living things; others show the reverse pattern • Together, these cases suggest partly separate brain systems for the two categories
Living vs. Non-living? • fMRI studies have shown different brain areas for living and non-living things • There is a lot of overlap between the two areas, though • How damage produces category-specific deficits is not well understood
Category-specific neurons • Some neurons only respond to certain categories • A “Bill Clinton” neuron? Probably not. • A “Bill Clinton” neural circuit? More likely.
Not categories, but continuum • There are probably no distinct face, house, chair, etc. areas in the brain • But everything is not stored in one place either • A mix of overlapping areas and distributed processes – Living vs. non-living is a big distinction