
PDP: Motivation, basic approach



Presentation Transcript


  1. PDP: Motivation, basic approach

  2. Cognitive psychology, or “How the Mind Works”

  3. Information processing Mental representations Transformations Perception / sensation Action

  4. Key questions: • What are the mental representations? • What are the transformations and how do they work? • Where do these come from?

  5. Answers circa 1979 • Mental representations are data structures (symbols bound together in relational structures) • “Transformation” processes are rules, like the instructions in a computer program. • The rules and starting representations are largely innate • Learning is a process of building up new structured representations from primitives and deriving (and storing) new rules.

  6. Why were (are?) these ideas so appealing?

  7. Their application is highly intuitive, logical, maybe even obvious! EG: modular, serial processing in word recognition…

  8. Explain generalization / generativity, central to human cognition! EG: Today I will frump the car, yesterday I….______________ the car. EG: Colorless green ideas sleep furiously!

  9. Can build mechanistic, implemented models of behavior…

  10. Explains modular cognitive impairments! Pure alexia: Word representations gone! Prosopagnosia: Face reps gone! Category-specific impairment: Animals gone! Broca’s aphasia: Phonological word forms gone! And so on…

  11. So why isn’t this a class on symbolic cognitive modeling?

  12. W O R K

  13. EG: Today I will meep the car, yesterday I….______________ the car.

  14. Learning and development…

  15. Cognitive impairments not so modular… Pure alexia: Can’t see high spatial frequencies. Prosopagnosia: Abnormal visual word recognition. Category-specific impairment: Can still name half the animals. Broca’s aphasia: Can still produce single concrete nouns, can’t do grammar. And so on… But the symbolic approach also had a lot of other problems…

  16. Some issues with modular, serial, symbolic approaches • Constraint satisfaction, context sensitivity • Quasi regularity • Learning and developmental change • Graceful degradation

  17. Rumelhart • Brains are different from serial digital computers. • Brain: • Neurons are like little information processors • They are slow and intrinsically noisy… • …but there are many of them and they operate in parallel, not serially. • It turns out that noisy, parallel computing systems are naturally suited to some of the kinds of behavioral tasks that challenged symbolic theories of the time.

  18. In other words… • Rather than thinking of the mind as some kind of unconstrained “universal computer,” maybe we should pay attention to the kinds of computations a noisy, parallel, brain-like system can naturally conduct. • Paying attention to the implementation might offer some clues about / constraints on theories about how the mind works. • Added bonus: Such theories might offer a bridge between cognition and neuroscience!

  19. [Figure: neuron anatomy (Golgi stain) — cell body, dendrites (receive signals), axon (transmits signals), axon terminal]

  20. [Figure: + and − charges distributed across the cell membrane]

  21. [Figure: + and − charges across the membrane, continued]

  22. [Figure: p(firing a spike) as a function of membrane potential at the axon hillock, from hyperpolarized to depolarized]

  23. Input: Depends on activation of sending neurons and efficacy of synapses Output: Train of action potentials at a particular rate Weight: Effect on downstream neuron

  24. Six elements of connectionist models: • A set of units • A weight matrix • An input function • A transfer function • A model environment • A learning rule

  25. Six elements of connectionist models: • A set of units • Each unit is like a population of cells with a similar receptive field. • Think of all units in a model as a single vector, with each unit corresponding to one element of the vector. • At any point in time, each unit has an activation state analogous to the mean firing activity of the population of neurons. • These activation states are stored in an activation vector, with each element corresponding to one unit.
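The idea of a single activation vector can be made concrete with a minimal NumPy sketch (not from the slides; the network size and variable names are illustrative):

```python
import numpy as np

# Hypothetical 5-unit network. All activation states live in one vector;
# each element is one unit, analogous to the mean firing rate of a
# population of neurons with a similar receptive field.
n_units = 5
activations = np.zeros(n_units)     # all units start at rest
activations[0:2] = [1.0, 0.0]       # clamp the two input units to a pattern
```

Clamping some elements while leaving the rest to be computed is exactly how a “partial input” is presented to such a model.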

  26. [Figure: layered network — input, hidden, and output units plus a bias unit, with the units’ activation states written as a single vector]

  27. Six elements of connectionist models: • A weight matrix • Each unit sends and receives a weighted connection to/from some other subset of units. • These weights are analogous to synapses: they are the means by which one unit transmits information about its activation state to another unit. • Weights are stored in a weight matrix.
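A weight matrix for such a network can be sketched the same way (illustrative sizes and values; the row/column convention shown here is one common choice, not necessarily the deck’s):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 5
# weights[i, j] = strength of the connection from sending unit j
# to receiving unit i (rows index receiving units, columns sending units).
weights = rng.normal(scale=0.1, size=(n_units, n_units))
# A zero entry means "no synapse"; e.g., no self-connections:
np.fill_diagonal(weights, 0.0)
```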

  28. [Figure: the same network’s weight matrix, organized by sending and receiving units]

  29. Six elements of connectionist models: • An input function • For any given receiving unit, there needs to be some way of combining weights and sending activations to determine the unit’s net input. • This is almost always the dot product (i.e., the weighted sum) of the sending activations and the weights.
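The dot-product input function described above can be sketched directly (values are made up for illustration):

```python
import numpy as np

# One receiving unit: net input = dot product of the sending activations
# and the weights on its incoming connections.
sending = np.array([1.0, 0.0, 0.5])       # activations of three sending units
w_incoming = np.array([0.2, -0.4, 0.6])   # weights into one receiving unit
net_input = np.dot(sending, w_incoming)   # 1*0.2 + 0*(-0.4) + 0.5*0.6 = 0.5

# For every receiving unit at once, this is a matrix-vector product:
weights = np.array([[0.2, -0.4, 0.6],
                    [0.1,  0.3, -0.2]])   # rows: receiving, cols: sending
net = weights @ sending                   # net input to each receiving unit
```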

  30. [Figure: computing one receiving unit’s net input from the sending activations and the weights]

  31. Six elements of connectionist models: • A transfer function • Converts each unit’s net input into its activation state.
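The transfer function named in the slide-24 list converts net input into an activation state. The logistic sigmoid is a classic choice (shown here as an illustrative sketch, not necessarily the function this deck goes on to use); note how it mirrors the earlier picture of p(spike) rising with depolarization:

```python
import numpy as np

def logistic(net):
    """Sigmoid transfer function: squashes net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net))

# Strongly negative net input -> activation near 0 (hyperpolarized),
# zero net input -> 0.5, strongly positive -> near 1 (depolarized).
acts = logistic(np.array([-4.0, 0.0, 4.0]))
```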

  32. [Figure: the network again, with activation states still to be computed]

  33. Six elements of connectionist models: • A model environment • All the models do is compute activation states over units, given the preceding elements and some partial input. • The model environment specifies how events in the world are encoded in unit activation states, typically across a subset of units. • It consists of vectors that describe the input activations corresponding to different events, and sometimes the “target” activations that the network should generate for each input.
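A model environment of the kind described above is just a set of input vectors paired (sometimes) with target vectors. The X-OR function used on the next slides makes a compact example:

```python
import numpy as np

# X-OR environment: each row of `inputs` is one event, encoded over the
# two input units; `targets` gives the output unit's desired activation.
inputs = np.array([[0, 0],
                   [0, 1],
                   [1, 0],
                   [1, 1]], dtype=float)
targets = np.array([0, 1, 1, 0], dtype=float)
```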

  34. [Figure: network for the X-OR function — Input1, Input2, Hidden1, Hidden2, Output, plus bias — shown with its weight matrix]

  35. Note that the model environment is always theoretically important! • It amounts to a theoretical statement about the nature of the information available to the system from perception and action or prior cognitive processing. • Many models sink or swim on the basis of their assumptions about the nature of inputs / outputs.

  36. Six elements of connectionist models: • A learning rule • Only specified for models that learn (obviously) • Specifies how the values stored in the weight matrix should change as the network processes patterns • Many different varieties that we will see: • Hebbian • Error-correcting (e.g. backpropagation) • Competitive / self-organizing • Reinforcement-based
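Of the learning-rule varieties listed, the Hebbian rule is the simplest to sketch: a weight grows when its sending and receiving units are active together. A minimal illustrative version (variable names and the learning rate are my own, not the deck’s):

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Hebbian rule: dw[i, j] = lr * post[i] * pre[j], so w[i, j] is
    strengthened when sending unit j and receiving unit i co-activate."""
    return weights + lr * np.outer(post, pre)

# One receiving unit, two sending units; only the first sender is active.
w = np.zeros((1, 2))
w = hebbian_update(w, pre=np.array([1.0, 0.0]), post=np.array([1.0]))
# Only the weight from the co-active sending unit changes.
```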

  37. [Figure: the X-OR network again — Input1, Input2, Hidden1, Hidden2, Output, plus bias — with its weight matrix]

  38. Central challenge • Given just these elements, can we build little systems—models—that help us to understand human cognitive abilities, and their neural underpinnings?

  39. One early example…
