PDP: Motivation, basic approach
Cognitive psychology, or “How the Mind Works”
Information processing
• Mental representations
• Transformations
• Perception / sensation
• Action
Key questions: • What are the mental representations? • What are the transformations and how do they work? • Where do these come from?
Answers circa 1979 • Mental representations are data structures (symbols bound together in relational structures) • “Transformation” processes are rules, like the instructions in a computer program. • The rules and starting representations are largely innate • Learning is a process of building up new structured representations from primitives and deriving (and storing) new rules.
Their application is highly intuitive, logical, maybe even obvious! E.g., modular, serial processing in word recognition…
Explains generalization / generativity, central to human cognition! E.g.: Today I will frump the car; yesterday I ______________ the car. E.g.: Colorless green ideas sleep furiously!
Explains modular cognitive impairments! Pure alexia: Word representations gone! Prosopagnosia: Face reps gone! Category-specific impairment: Animals gone! Broca’s aphasia: Phonological word forms gone! And so on…
E.g.: Today I will meep the car; yesterday I ______________ the car.
Cognitive impairments not so modular…
• Pure alexia: Can’t see high spatial frequencies.
• Prosopagnosia: Abnormal visual word recognition.
• Category-specific impairment: Can still name half the animals.
• Broca’s aphasia: Can still produce single concrete nouns; can’t do grammar.
• And so on… and these patients also had a lot of other problems…
Some issues with modular, serial, symbolic approaches • Constraint satisfaction, context sensitivity • Quasi regularity • Learning and developmental change • Graceful degradation
Rumelhart • Brains are different from serial digital computers. • In the brain: • Neurons are like little information processors. • They are slow and intrinsically noisy… • …but there are many of them, and they operate in parallel, not in serial. • It turns out that noisy, parallel computing systems are naturally suited to some of the kinds of behavioral tasks that challenged symbolic theories of the time.
In other words… • Rather than thinking of the mind as some kind of unconstrained “universal computer,” maybe we should pay attention to the kinds of computations a noisy, parallel, brain-like system can naturally conduct. • Paying attention to the implementation might offer some clues about / constraints on theories about how the mind works. • Added bonus: Such theories might offer a bridge between cognition and neuroscience!
[Figure: Golgi-stained neuron, labeling the cell body, dendrites (receive signals), axon (transmits signals), and axon terminal]
[Figure: distribution of positive and negative charges across the neuronal membrane]
[Figure: probability of firing a spike as a function of membrane potential at the axon hillock, from hyperpolarized to depolarized]
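The S-shaped relation in this figure, with firing probability rising smoothly from near zero when hyperpolarized to near one when depolarized, is commonly modeled with a logistic (sigmoid) transfer function. A minimal sketch, assuming the standard logistic form (the slides show only the qualitative shape):

```python
import math

# Logistic (sigmoid) transfer function: maps net input (analog of membrane
# potential) to an activation between 0 and 1 (analog of firing probability).
# The logistic form is an assumption; the slide only shows the S-shaped curve.
def sigmoid(net_input):
    return 1.0 / (1.0 + math.exp(-net_input))

print(sigmoid(-4.0))  # strongly hyperpolarized: close to 0
print(sigmoid(0.0))   # midpoint: 0.5
print(sigmoid(4.0))   # strongly depolarized: close to 1
```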
• Input: Depends on activation of sending neurons and efficacy of synapses
• Output: Train of action potentials at a particular rate
• Weight: Effect on downstream neuron
Six elements of connectionist models: • A set of units • A weight matrix • An input function • A transfer function • A model environment • A learning rule
Six elements of connectionist models: • A set of units • Each unit is like a population of cells with a similar receptive field. • Think of all units in a model as a single vector, with each unit corresponding to one element of the vector. • At any point in time, each unit has an activation state analogous to the mean firing activity of the population of neurons. • These activation states are stored in an activation vector, with each element corresponding to one unit.
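As a concrete sketch, a layer of units can be stored as a NumPy vector; the unit count and activation values below are illustrative assumptions, not values from the slides:

```python
import numpy as np

# A layer of units as an activation vector (unit count and values are
# illustrative assumptions, not taken from the slides).
activations = np.array([1.0, 0.51, 0.52, 0.45])

# One element per unit; each value is analogous to the mean firing
# rate of a population of neurons with a similar receptive field.
print(activations.shape)  # (4,)
print(activations[1])     # activation state of the second unit
```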
[Figure: network diagram with input, hidden, output, and bias units, annotated with an example activation vector]
Six elements of connectionist models: • A weight matrix • Each unit sends and receives a weighted connection to/from some subset of other units. • These weights are analogous to synapses: they are the means by which one unit transmits information about its activation state to another unit. • Weights are stored in a weight matrix.
[Figure: weight matrix, with one axis for receiving units and one for sending units (input, hidden, output, and bias)]
Six elements of connectionist models: • An input function • For any given receiving unit, there needs to be some way of combining weights and sending activations to determine the unit’s net input. • This is almost always the dot product (i.e., the weighted sum) of the sending activations and the weights.
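A minimal sketch of the input function; the specific activations and weights are illustrative assumptions:

```python
import numpy as np

# Illustrative sending activations; the last element is a bias unit clamped to 1.
sending = np.array([1.0, 0.0, 0.5, 1.0])

# Illustrative weights from each sending unit to one receiving unit.
weights = np.array([0.8, -0.4, 0.6, -0.2])

# Net input is the dot product (weighted sum) of activations and weights:
# 1.0*0.8 + 0.0*(-0.4) + 0.5*0.6 + 1.0*(-0.2) = 0.9
net_input = np.dot(sending, weights)
print(net_input)
```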
[Figure: computing a receiving unit’s net input from sending activations and weights]
Six elements of connectionist models: • A model environment • All the models do is compute activation states over units, given the preceding elements and some partial input. • The model environment specifies how events in the world are encoded in unit activation states, typically across a subset of units. • It consists of vectors that describe the input activations corresponding to different events, and sometimes the “target” activations that the network should generate for each input.
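As a sketch, a model environment for the X-OR function used in this deck’s network figures can be written as paired input and target vectors:

```python
import numpy as np

# Model environment for X-OR: four events, each an input pattern paired
# with the target activation the network should produce.
inputs = np.array([[0.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 0.0],
                   [1.0, 1.0]])

targets = np.array([0.0, 1.0, 1.0, 0.0])  # X-OR of the two input units

for x, t in zip(inputs, targets):
    print(x, "->", t)
```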
[Figure: network for the X-OR function, with units Input1, Input2, Hidden1, Hidden2, Output, and bias]
Note that the model environment is always theoretically important! • It amounts to a theoretical statement about the nature of the information available to the system from perception and action or prior cognitive processing. • Many models sink or swim on the basis of their assumptions about the nature of inputs / outputs.
Six elements of connectionist models: • A learning rule • Only specified for models that learn (obviously) • Specifies how the values stored in the weight matrix should change as the network processes patterns • Many different varieties that we will see: • Hebbian • Error-correcting (e.g. backpropagation) • Competitive / self-organizing • Reinforcement-based
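A minimal sketch of the simplest variety listed above, a Hebbian rule: each weight changes in proportion to the product of the sending and receiving activations. The learning rate and activation values are illustrative assumptions:

```python
import numpy as np

# Hebbian update: each weight grows in proportion to the product of the
# sending (presynaptic) and receiving (postsynaptic) activations.
# Learning rate and activation values are illustrative assumptions.
lr = 0.1
sending = np.array([1.0, 0.0, 0.5])   # activations of 3 sending units
receiving = np.array([0.8, 0.2])      # activations of 2 receiving units

# Outer product gives one weight change per (receiving, sending) pair.
delta_w = lr * np.outer(receiving, sending)

weights = np.zeros((2, 3))            # rows: receiving units, cols: sending
weights += delta_w
print(weights)
```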
Central challenge • Given just these elements, can we build little systems—models—that help us to understand human cognitive abilities, and their neural underpinnings?
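As a closing sketch, the six elements can be combined into a tiny working model: a feedforward network that computes the X-OR function from the earlier slides. All weights below are hand-set for illustration; a learning rule would normally find them:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hand-set weights (illustrative assumptions, not learned values).
W_hidden = np.array([[20.0, 20.0],    # hidden unit 1 acts like OR
                     [20.0, 20.0]])   # hidden unit 2 acts like AND
b_hidden = np.array([-10.0, -30.0])   # weights from the bias unit
W_out = np.array([20.0, -20.0])       # output ~ "OR and not AND" = X-OR
b_out = -10.0

def forward(x):
    # Input function (dot product) followed by transfer function (sigmoid).
    h = sigmoid(W_hidden @ x + b_hidden)
    return sigmoid(W_out @ h + b_out)

for pattern in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    y = forward(np.array(pattern))
    print(pattern, "->", round(float(y)))
```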